Conference Year: 2024

What the World Would Look Like if Plugin Instances Could Simply Talk to Each Other

At the end of 2021 we launched smart:EQ 3, which for the first time featured an Inter-Plugin-Communication approach that allows finding other instances of the plugin in the project and defining groups of plugins. At the end of 2023 smart:EQ 4 was released, which took the concept even further: every instance of smart:EQ 4 in the project can be fully remote-controlled from the editor of any other instance.

Based on the experience collected during the implementation of smart:EQ 3, we started building an entire in-house abstraction framework focused on the idea of Inter-Plugin-Instance-Communication. It became the backbone of our most recent plugin releases and revealed its full potential with the release of smart:EQ 4.

As of today, Inter-Plugin-Instance-Communication is not a built-in feature of any of the popular plugin formats. So we had no choice but to implement our own workarounds, which make great things possible but also come with downsides. The goal of this talk is to start an open discussion with the industry about how this could be integrated into some kind of extension or companion API to existing plugin format standards. During the talk, we want to reflect on our journey of making inter-plugin communication and plugin group management work: the challenge of instance discovery and the abstraction of data channels, why a strict processor/view separation helped us a great deal in making all the functionality work, and how we recently started to extend the framework to build a proof-of-concept ARA plugin with it.

We also want to talk about the challenges involved: reliable detection of plugin instances in a project, and management of a plugin group state with plugin APIs that only allow saving per-plugin state information. Since we believe that great plugins could be made if Inter-Plugin-Instance-Communication were easier, we want to present a high-level proposal of how an Inter-Plugin-Instance-Communication API could work from our point of view, and we hope for a lively discussion at the conference. Host developers are especially welcome!
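To make the discovery challenge concrete, here is a minimal sketch of a process-wide instance registry. All names are hypothetical; this is not the actual framework behind smart:EQ, just an illustration of the idea that each instance announces itself so peers can enumerate their group.

```cpp
#include <cassert>
#include <mutex>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical process-wide registry: every plugin instance announces
// itself on construction and withdraws on destruction, so any instance
// can enumerate its peers and form groups. Real-world workarounds must
// also handle hosts that load plugins into separate (sandboxed)
// processes, which a plain in-process singleton like this cannot.
class InstanceRegistry {
public:
    static InstanceRegistry& shared() {
        static InstanceRegistry r;   // one registry per process
        return r;
    }
    // Register a new instance under a group name; returns its id.
    int add(const std::string& group) {
        std::lock_guard<std::mutex> lock(m_);
        int id = nextId_++;
        groups_[id] = group;
        return id;
    }
    void remove(int id) {
        std::lock_guard<std::mutex> lock(m_);
        groups_.erase(id);
    }
    // All instance ids currently registered under a group name.
    std::vector<int> peers(const std::string& group) const {
        std::lock_guard<std::mutex> lock(m_);
        std::vector<int> ids;
        for (const auto& [id, g] : groups_)
            if (g == group) ids.push_back(id);
        return ids;
    }
private:
    mutable std::mutex m_;
    int nextId_ = 1;
    std::unordered_map<int, std::string> groups_;
};
```

Even in this toy form, the per-plugin state problem described above is visible: the host only persists each instance's own state blob, so the instances themselves must decide which member of a group serializes the shared group state on save.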

Filed under: Uncategorized

Atmospheric Plugin Design

A headfirst swan dive into designing plugins to evoke atmospheres and emotional responses, and the creative potential that approach can unlock, through the lens of interpreting Dante Alighieri's Inferno as a multi-FX plugin.

This talk hopes to inspire and encourage experimentation, and to celebrate and nurture the often-overlooked creative side of audio programming rather than focusing on pre-established effects, while also touching on the decisions, the ups and downs, and the actual development process for "Inferno".


Learning While Building

The “build, measure, learn” loop and the importance of iterating on “working software”, brought to us by the eXtreme Programming (XP) and Agile Software Development movements, are now an essential part of 21st-century software development. The ability to iterate, learn, and pivot becomes even more valuable in fast-moving industries like digital audio. We can increase our pace of learning and iteration not only by embracing these techniques from XP and Agile, but also by bringing the learning process earlier, before we begin to write “working software.”

Will people even want what I’m building? A discussion of Lean Software Development will illuminate ways a Minimum Viable Product (MVP) can be built to get external feedback from the market while writing little—or hopefully no—code. After covering general examples of MVPs, we will look at some fantasy audio plug-in MVP examples, as well as discuss how MVPs were used to test some currently shipping audio plug-ins.

There are also internal learnings to be gained about the thing we are building: enter the world of prototyping. We will discuss the differences between MVPs and prototypes, how to use both for learning, and the different types of software and non-software prototyping. After a tour of example prototypes, we will look at some prototyping tools useful for digital audio, starting from one of the oldest tools on the block, Max (or now Max for Live), to some of the latest community-built tools for prototyping machine learning within the DAW.

This tour of user research, MVPs, and prototyping will show many ways to learn along your software development process, but at the end of our journey we will see that there are learnings about an idea that can only be obtained by combining physical gesture with audio feedback.


An Introductory Guide to Virtual Analog Modelling

Audio circuits, such as guitar pedals and amplifiers, transform input signals into output signals with specific audio effects. The process of replicating the electrical behaviour of these circuits in a digital environment is known as Virtual Analog (VA) modelling. This session aims to provide a basic understanding of VA modelling and explore various methods to achieve it. Finally, using these methods as a foundation, we will look at building a model of the MXR Distortion+ guitar pedal that emulates the circuit’s behaviour.
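As a taste of this kind of white-box modelling, here is a sketch of a memoryless diode-clipper stage in the spirit of the Distortion+'s clipping section. The component values are illustrative rather than measured from the pedal, and the real circuit also contains an op-amp gain stage and filtering that this sketch omits; the point is the general technique of solving the clipper's implicit nodal equation with a few Newton–Raphson iterations per sample.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Memoryless diode-clipper model: the input drives a series resistor R
// into a pair of antiparallel diodes to ground. Kirchhoff's current law
// with the Shockley diode equation gives the implicit relation
//   (vIn - vOut) / R = 2 * Is * sinh(vOut / Vt),
// which we solve per sample with Newton-Raphson. Component values are
// illustrative, not measured from an MXR Distortion+.
float diodeClipper(float vIn) {
    const float R  = 10000.0f;   // series resistance, ohms (assumed)
    const float Is = 1e-9f;      // diode saturation current, amps
    const float Vt = 0.02585f;   // thermal voltage, volts

    // Initial guess: the analytic solution when vOut << vIn, clamped so
    // that small (non-clipping) inputs start near the pass-through value.
    float mag = std::min(std::fabs(vIn),
                         Vt * std::asinh(std::fabs(vIn) / (2.0f * Is * R)));
    float vOut = std::copysign(mag, vIn);

    for (int i = 0; i < 32; ++i) {
        float f  = (vIn - vOut) / R - 2.0f * Is * std::sinh(vOut / Vt);
        float df = -1.0f / R - (2.0f * Is / Vt) * std::cosh(vOut / Vt);
        float step = f / df;
        vOut -= step;
        if (std::fabs(step) < 1e-7f) break;  // converged
    }
    return vOut;
}
```

Small inputs pass through almost unchanged, while large inputs are compressed toward a few hundred millivolts, exactly the soft-clipping behaviour a physical diode pair exhibits.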


Keynote: Sonic Cartography

From the chemotaxis of single-celled organisms to human exploration of the solar system, it seems that terrestrial brains are exquisitely tuned for navigating in 3D space. There is evidence that we use our navigation brains, by way of conceptual metaphors, to reason about even the most abstract concepts. But what about when that space is infinite? How do we navigate within a virtual space that does not even exist until we generate it? This is the domain that audio developers inhabit and explore on a routine basis. And we’ve developed strategies that have analogs in biological evolution, anthropology, physics and assembly theory. Audio developers are the under-appreciated protagonists of this story (but there is a happy ending).


Keynote: Foundation Models Don’t Understand Me

Living in a world of rapidly accelerating synthetic media, the outputs of generative AI often leave us feeling frustrated, amused, and even manipulated. Early examples of creative AI tools struggle to go beyond imitating styles and patterns, producing a context-less blend of borrowed aesthetics from the datasets they’re trained on. This race toward a statistically averaged, flattened aesthetic misunderstands the core goals of creative expression. In contrast, audio developers and instrument builders understand the importance of providing a toolkit of exploration, intentional serendipity, and discovery to a new age of artists performing with AI. Drawing on my work building AI musical instruments for live opera, symphony, and installations, I share a framework of frustrations and guidelines for AI instrument building.


Knee-Deep Learning

Dive in and start creating!

Dive into the basics of machine learning for audio and start creating with a few practical steps.

This talk is aimed at developers without prior experience in machine learning who want to get inspired and equipped with the knowledge to start their own projects. The purpose is to provide a practical introduction to the topic in order to demystify theory and overcome implementation complexities.

Whether you're looking to solve complex problems where traditional DSP methods fall short or conjure up unthinkable sounds, this session is for you.

We dive right in, using simple and free tools to acquire data, set up code to create an ML training and inference pipeline, explore training techniques, and analyze and evaluate the results as we go. We cover what hardware is needed for training at different scales, ranging from cloud computing to consumer GPUs.

We'll cover basic theory, a brief history of different approaches, and, in particular, practical advice on getting started: data requirements, data acquisition, training, hardware needs, and deployment, including options for on-device real-time inference, embedded systems, and cloud-based SaaS.

Throughout, simple example model architectures suitable for beginners are used.

After training and analyzing some simple models, we explore different deployment options, including cloud-based inference, on-device native code using popular inference frameworks, and dedicated embedded hardware modules.
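To make "on-device real-time inference" concrete, here is a minimal sketch of the kind of hand-rolled kernel a plugin might run once a small trained model's weights have been exported from the training framework. The layer shape and weights are placeholders, not a trained model or any particular inference library's API.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Minimal dense-layer forward pass with a tanh activation: the innermost
// building block of a tiny neural network running natively inside an
// audio callback. Weights and biases would be filled in from a model
// exported after training; the values here are placeholders.
template <int In, int Out>
struct Dense {
    std::array<std::array<float, In>, Out> w{};  // weight matrix (Out x In)
    std::array<float, Out> b{};                  // bias vector

    std::array<float, Out> forward(const std::array<float, In>& x) const {
        std::array<float, Out> y{};
        for (int o = 0; o < Out; ++o) {
            float acc = b[o];
            for (int i = 0; i < In; ++i)
                acc += w[o][i] * x[i];   // weighted sum of inputs
            y[o] = std::tanh(acc);       // nonlinearity
        }
        return y;
    }
};
```

Stacking a few such layers, with weights baked in at load time and no heap allocation in the audio path, is essentially what dedicated real-time inference frameworks do under the hood, with far more optimisation.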


Auditory and Cognitive Neuroscience and the State of Audio Technology

As Dr Susan Rogers says, auditory science and brain science attract similar kinds of thinkers — those who are OK with imagining the mechanism and process. One of the most fascinating areas of neuroscience is the emerging theoretical framework known as ‘predictive processing’, which describes the brain as essentially a prediction machine, combining ideas from psychology and neuroscience, machine learning, and information theory. Fundamentally, every technology we develop in the audio industry is an interface with the brain. In recent years, advances in understanding how the brain works have given us opportunities to discover novel approaches to audio interfaces, which should lead to the design of new products. This session will better inform us, as audio technology developers, about what kinds of tools we can develop that harness the brain’s abilities to advance the fields of music technology, audio engineering, live sound, music education, and more.

Speaking is Dr Susan Rogers herself, sound engineer for Prince in the 1980s; with a doctorate in psychology and a focus on music cognition and psychoacoustics, she is currently associate professor in the Music Production and Engineering and Liberal Arts departments at Berklee College of Music. We also welcome Dr Micha Heilbron, whose work is at the intersection of cognitive science and AI and who is assistant professor in cognitive AI and computational cognitive (neuro)science at the University of Amsterdam. We are also joined by cognitive computational neuroscientist Dr Ryszard Auksztulewicz, who is interested in prediction, memory, learning, attention, and their disorders.

Indicative themes include:

  • Prediction
  • Signal processing
  • Compression
  • Generative models
  • Music preference

The format will be a panel with an in-person moderator and online experts in their expansive fields, including auditory and cognitive neuroscience, psychoacoustics, AI & ML, and music and sensory processing.


Amplifying Efficiency

Have a great audio product (or idea for an audio product) but don’t know how to turn it into a business? Want to start your own audio plugin company but have no idea what that even means? Already have a product but don’t know how to get momentum in the market? Learn from our mistakes!

We acquired decades of experience in internal business tools & systems, financial analysis, automation, and analytics before leaving our 9-to-5 jobs to start a new audio venture. Even with all that background, we’ve had to learn a lot in the last two years, and have a ton of information to share with you.

This talk focuses on what we believe is one of the keys to our success: Infrastructure. It’s an often-overlooked (and frankly, avoided) aspect of running a business that many people think they can’t afford to spend time on. We think you can’t afford not to.


A Spatial Audio Primer

Hearing is the only sense that allows us to localise sources from any direction, but how? How does our spatial hearing system work, and how can we trick it? How do we define a 3D soundscape? How did we get here? All that and more will be answered in this talk, suitable for all experience levels.
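One of the classic localisation cues the talk will cover is the interaural time difference (ITD): a lateral source reaches one ear slightly before the other. As a small taste, here is Woodworth's well-known spherical-head approximation of the ITD as a function of source azimuth (the head radius below is a textbook average, not a fitted value).

```cpp
#include <cassert>
#include <cmath>

// Woodworth's spherical-head approximation of the interaural time
// difference:  itd = (a / c) * (theta + sin(theta)),
// where a is the head radius, c the speed of sound, and theta the
// source azimuth in radians (0 = straight ahead, pi/2 = fully lateral).
double itdSeconds(double azimuthRadians) {
    const double a = 0.0875;  // average head radius, metres (textbook value)
    const double c = 343.0;   // speed of sound in air, m/s
    return (a / c) * (azimuthRadians + std::sin(azimuthRadians));
}
```

A source straight ahead gives an ITD of zero, while a fully lateral source gives roughly 0.66 ms, matching the commonly quoted maximum human ITD of about 0.6 to 0.7 ms.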
