
Modular Audio Synthesis on FPGAs With the High Level Synthesis Development Flow – Aman Jagwani

Join Us For ADC24 - Bristol - 11-13 November 2024
More Info: https://audio.dev/
@audiodevcon​

Modular Audio Synthesis on FPGAs With the High Level Synthesis Development Flow - Aman Jagwani - ADCx India 2024

This talk will present a modular sound synthesis design approach for FPGAs using high level synthesis (HLS) tools. With their strong potential for high throughput, ultra-low latency and high sampling rates, field programmable gate arrays can be extremely beneficial for audio processing applications. Traditionally, FPGAs are highly complex to program, requiring specialized hardware design knowledge as well as knowledge of low level hardware description languages like VHDL and Verilog. HLS allows FPGAs to be programmed at a higher level of abstraction in languages such as C and C++, enabling greater accessibility and enhanced productivity.

However, despite HLS offering a higher level of abstraction, it still requires specialized approaches, especially when applied to audio applications. The programming flow generally entails the design of a top level function that gets translated into a standalone hardware Intellectual Property (IP) module, with its arguments defining inputs and outputs. Within the function, along with the processing algorithms, optimization directives or pragmas have to be used to define behavior, communication and interfaces. The generated IP module then has to be integrated and deployed in a complete hardware design using a set of downstream FPGA design tools, such as Vivado and Vitis in the case of the AMD/Xilinx FPGA environment. Within this context, this talk will introduce a set of inter-connectable sound synthesis HLS IP cores or modules, accompanied by a base audio system, to exemplify the feasibility and design considerations of modular synthesis on FPGAs. The modules presented will include different types of oscillators, filters, envelope generators and other components of modular synths. Audio and video examples will also be presented.
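
To give a flavour of what this flow looks like in practice, here is a minimal Vitis HLS sketch of a top-level oscillator function, with pragmas defining a streaming audio output, an AXI-Lite control interface and a pipelined sample loop. The names, data widths and interface choices are illustrative assumptions, not the modules presented in the talk.

```cpp
// Minimal, illustrative Vitis HLS sketch (assumptions, not the talk's modules):
// a phase-accumulator sawtooth oscillator. The top-level function becomes a
// standalone IP core; its arguments become hardware interfaces via pragmas.
#include <hls_stream.h>
#include <ap_int.h>

typedef ap_int<24> sample_t;   // 24-bit signed PCM sample

void saw_osc(hls::stream<sample_t>& audio_out,
             ap_uint<32>            phase_inc,   // tuning word: freq/fs * 2^32
             int                    num_samples)
{
    // Interface pragmas: audio leaves over AXI-Stream; the control arguments
    // and the block-level handshake are exposed over an AXI-Lite register bank.
#pragma HLS INTERFACE axis      port=audio_out
#pragma HLS INTERFACE s_axilite port=phase_inc
#pragma HLS INTERFACE s_axilite port=num_samples
#pragma HLS INTERFACE s_axilite port=return

    static ap_uint<32> phase = 0;      // static: oscillator state persists across calls

    for (int i = 0; i < num_samples; ++i) {
        // Behaviour pragma: pipeline the loop to produce one sample per clock cycle.
#pragma HLS PIPELINE II=1
        phase += phase_inc;
        // The top 24 bits of the wrapping phase, read as signed two's complement,
        // form a full-scale sawtooth.
        audio_out.write((sample_t)(phase >> 8));
    }
}
```

An IP core generated from a function like this would then be wired to an I2S or DMA path and controlled from software using the downstream Vivado and Vitis tools mentioned above.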
_

Edited by Digital Medium Ltd - online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC24 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Prashant Mishra

#adc #dsp #audio #fpga


Real-Time Inference of Neural Networks: A Guide for DSP Engineers – Valentin Ackva & Fares Schulz

https://audio.dev/ -- @audiodevcon​

Real-time Inference of Neural Networks: A Practical Approach for DSP Engineers - Valentin Ackva & Fares Schulz - ADC 2023

In upcoming audio processing innovations, the intersection of neural networks and real-time environments is set to play a decisive role. Our recent experience of implementing neural timbre transfer technology in a real-time setting presented us with diverse challenges. Overcoming them has given us significant insights into the practicalities of running neural network inference inside an audio plugin.

This talk presents a pragmatic approach: starting with a trained model, we guide you through the steps needed to run it in a real-time environment. Along the way we delve into the critical aspect of maintaining real-time safety and share proven strategies for ensuring a seamless, uninterrupted signal flow. Moreover, we address the delicate balance between latency, performance, and stability. For this we use three different inference engines: libtorch, tensorflow-lite and onnxruntime. While the in-house solutions of the popular machine learning frameworks PyTorch and TensorFlow seem like obvious choices, other engines may sometimes be better suited to certain use cases. By contrasting the characteristics of these engines, we hope to simplify your decision-making process.
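
As a rough illustration of this workflow, the sketch below loads an ONNX model with onnxruntime's C++ API, allocating the session, memory info and buffers up front so the per-block call does as little extra work as possible. The model path, tensor names and shape are assumptions for the example, and whether Run() itself stays real-time safe depends on the engine and model — exactly the trade-off the talk asks you to measure.

```cpp
// Minimal onnxruntime C++ sketch (model path, tensor names and shape are
// assumed for illustration). Everything heavyweight is created once, up
// front; the audio-thread call only fills a pre-allocated buffer and runs.
#include <onnxruntime_cxx_api.h>
#include <algorithm>
#include <array>
#include <vector>

class NeuralProcessor {
public:
    NeuralProcessor(const char* modelPath, size_t blockSize)
        : env_(ORT_LOGGING_LEVEL_WARNING, "rt-inference"),
          session_(nullptr),
          memInfo_(Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault)),
          input_(blockSize, 0.0f),
          shape_{1, 1, static_cast<int64_t>(blockSize)}   // assumed [batch, channel, time]
    {
        Ort::SessionOptions opts;
        opts.SetIntraOpNumThreads(1);                      // keep inference single-threaded
        session_ = Ort::Session(env_, modelPath, opts);
    }

    // Called once per audio block. Note: depending on the engine and the
    // model, Run() may still allocate or lock internally -- profiling and
    // bounding this is part of the latency/performance/stability balance.
    void process(const float* in, float* out, size_t numSamples)
    {
        std::copy(in, in + numSamples, input_.begin());

        Ort::Value inTensor = Ort::Value::CreateTensor<float>(
            memInfo_, input_.data(), input_.size(), shape_.data(), shape_.size());

        const char* inputNames[]  = {"input"};             // assumed tensor names
        const char* outputNames[] = {"output"};

        auto results = session_.Run(Ort::RunOptions{nullptr},
                                    inputNames, &inTensor, 1,
                                    outputNames, 1);

        const float* data = results[0].GetTensorData<float>();
        std::copy(data, data + numSamples, out);
    }

private:
    Ort::Env env_;
    Ort::Session session_;
    Ort::MemoryInfo memInfo_;
    std::vector<float> input_;
    std::array<int64_t, 3> shape_;
};
```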
_

Valentin Ackva

I am an audio programmer and electronic musician based in Berlin. With a background in computer science, I'm currently working towards my master's degree in Audio Communication and Technology at the Technische Universität Berlin. My passion lies at the intersection of music, programming, and technology, especially where artistry meets innovation. For the last 3 years, I have been working as an audio software developer at a speech processing startup in Leipzig. At my position there, I am responsible for the development of audio effects for speech enhancement. This role includes research into the real-time implementation of state-of-the-art neural networks for tasks such as denoising, audio super-resolution, and dereverberation. This year, I have co-founded a collective that combines the fields of DSP and AI, bringing together a group of audio programmers, machine learning engineers, and artists based in Berlin. In March, we released our first software, "Scyclone", an audio plugin that utilizes neural timbre transfer technology, introducing a new approach to automatic layering. Scyclone's innovative design and interaction of DSP and AI led to it winning the Audio Plugin Competition organised by the Audio Programmer.
_

Fares Schulz

Hello! I am a student assistant at the Electronic Studio of Technische Universität Berlin, currently pursuing a master's degree in Audio Communication and Technology. My educational background includes two bachelor's degrees in physics and audio engineering. During this time, my passion for audio software gradually led me from theoretical mathematical equations and abstract artistic concepts to their development as DSP algorithms in Python and their implementation as real-time audio applications in C++. Currently, I am particularly interested in exploring novel applications of neural networks for digital signal processing. Together with like-minded people, I recently developed the open source project Scyclone, which won the Neural Audio Plugin Competition organized by Audio Programmer. In addition to my academic and open source endeavors, I am actively involved in the development of noise reduction algorithms in Python and C++ for medical devices at Miethke.
_

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC23 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #audiodev #dsp #audio


Build a High Performance Audio App With a Web GUI & C++ Audio Engine – Colin Sullivan & Kevin Dixon

https://audio.dev/ -- @audiodevcon​

Building a High Performance Audio Application With a Web GUI and C++ Audio Engine - Colin Sullivan & Kevin Dixon - ADC23

The era of using web UIs for audio applications is just beginning. How might we build a high performance audio application on the foundations of the JUCE web component? How might we overcome some of its limitations, such as passing chunks of binary data to the GUI? How might we deal with the complexities of a dual-sided system written in two different languages? We have developed a solution: a high-performance application architecture combining a C++ audio engine with a web GUI.

Both the C++ audio engine and the web UI implement their own unidirectional data flow, and combine to form an application-wide unidirectional data flow that allows the GUI to send actions into the C++ application to initiate state changes. We will discuss the tooling we developed for sharing data types between the two languages without error-prone manual maintenance, as well as the communication protocol itself and how we overcame limitations by intercepting HTTP requests from the webview in the C++ application.

We will discuss the performance considerations of integrating the unidirectional data flow architecture with a real-time audio engine and the high-performance architecture of the Web GUI itself.
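
As an entirely hypothetical illustration (these are not the authors' types or APIs) of what a unidirectional data flow can look like on the C++ side, the sketch below routes every GUI interaction through a plain-data action and a single reducer, so state only ever changes in one place:

```cpp
// Illustrative sketch of a unidirectional data flow in C++ (all names are
// assumptions): GUI -> action -> reducer -> new state -> subscribers.
#include <functional>
#include <mutex>
#include <type_traits>
#include <variant>
#include <vector>

// Actions are plain data describing *what happened*, never how to mutate state.
struct SetGain  { float value; };
struct SetMuted { bool muted; };
using Action = std::variant<SetGain, SetMuted>;

struct AppState {
    float gain  = 1.0f;
    bool  muted = false;
};

// Pure reducer: old state + action -> new state. Easy to test, and easy to
// mirror in the web GUI's own store.
inline AppState reduce(AppState s, const Action& a)
{
    std::visit([&s](auto&& act) {
        using T = std::decay_t<decltype(act)>;
        if constexpr (std::is_same_v<T, SetGain>)  s.gain  = act.value;
        if constexpr (std::is_same_v<T, SetMuted>) s.muted = act.muted;
    }, a);
    return s;
}

class Store {
public:
    using Listener = std::function<void(const AppState&)>;

    // Called from the GUI/message thread when the webview posts an action.
    void dispatch(const Action& a)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        state_ = reduce(state_, a);
        for (auto& l : listeners_) l(state_);   // e.g. push derived parameters onward
    }

    void subscribe(Listener l) { listeners_.push_back(std::move(l)); }

private:
    std::mutex mutex_;
    AppState state_;
    std::vector<Listener> listeners_;
};
```

In a real-time context, a listener registered on such a store would forward only the derived parameters the audio engine needs through a lock-free mechanism, rather than letting the audio thread touch the store itself.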

Link to Slides: https://data.audio.dev/building-a-physical-experience-for-virtual-instruments/slides.pdf
_

Colin Sullivan

Colin Sullivan is a software developer building interactive systems with web technologies and C++. Colin has operated as a software engineer across product lifecycles from incubation & product prototyping to product release in multinational companies and startups. Colin has built interactive installations and performed improvisational electronic music using bespoke generative music software.
_

Kevin Dixon

I've been building consumer and professional music applications for both desktop and mobile platforms since 2008. I originally started out working on a streaming video platform for public safety, where I was immediately exposed to the issues of communicating between a high-performance C++ engine and a garbage-collected language (C#) used for the GUI. After cutting my teeth on mobile development with the first Android G1 handset, I moved squarely into the audio industry. At Audyssey Labs, I was a founding member of their mobile team and was responsible for integrating DSP technology into any OS or firmware our clients demanded, most notably the Android OS itself. I was a lead developer on a consumer hifi listening app for iOS and was awarded a patent (US9860641B2) for delivering audio output device specific audio processing from the cloud. From there, my mobile experience led me to Native Instruments, where I contributed heavily to both of Native Instruments' mobile offerings, iMaschine and Traktor DJ, always with a focus on the audio engine and application architecture. Spending time in a product-focused environment taught me to be efficient with what makes it into the codebase while still balancing overall quality (decisions in codebases always last longer than you think!). After brief, yet productive, stints with Fender and Dolby, I landed at Output and have been focusing on audio engine and application architecture for the past two years.
_

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC23 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #audiodev #dsp #cppprogramming


Procedural Sound Design Applications – From Embedded to Games – Aaron Myles Pereira – ADCx India 2024

Join Us For ADC24 - Bristol - 11-13 November 2024
More Info: https://audio.dev/
@audiodevcon​

Procedural Sound Design Applications - From Embedded to Games - Aaron Myles Pereira - ADCx India 2024

Procedural Sound Design, defined by its adaptability and real-time generation capabilities, offers newfound possibilities in crafting immersive and dynamic auditory experiences. In this presentation, we will explore its key principles, benefits, and applications across two diverse domains: vehicles and video games.
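
As a toy illustration of what real-time, adaptive generation can mean in practice, here is a hypothetical sketch (not taken from the talk) of a procedurally generated engine-like voice whose pitch and roughness follow an RPM-style control instead of triggering pre-recorded samples:

```cpp
// Tiny illustrative sketch: a procedural "engine" tone whose pitch and
// roughness adapt in real time to an RPM-style control parameter.
#include <cmath>
#include <cstdlib>

class ProceduralEngineVoice {
public:
    explicit ProceduralEngineVoice(float sampleRate) : fs_(sampleRate) {}

    void setRpm(float rpm) { rpm_ = rpm; }   // driven by the game or vehicle state

    float nextSample()
    {
        // Map RPM to a fundamental frequency (mapping is an assumption).
        const float freq = 20.0f + rpm_ * 0.02f;
        phase_ += freq / fs_;
        if (phase_ >= 1.0f) phase_ -= 1.0f;

        // Harmonic core plus noise whose level grows with RPM ("roughness").
        const float twoPi = 6.2831853f;
        const float tone  = std::sin(twoPi * phase_) + 0.5f * std::sin(2.0f * twoPi * phase_);
        // std::rand() is only a placeholder noise source for the sketch;
        // a real voice would use its own PRNG.
        const float noise = ((std::rand() / (float)RAND_MAX) * 2.0f - 1.0f)
                          * (0.05f + rpm_ * 1e-5f);
        return 0.3f * tone + noise;
    }

private:
    float fs_, rpm_ = 800.0f, phase_ = 0.0f;
};
```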
_

Edited by Digital Medium Ltd - online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC24 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Prashant Mishra

#adc #audio #sounddesign #embedded


Embedded Software Development – A Wild Ride! – Matt Speed & Andy Normington – ADC23

https://audio.dev/ -- @audiodevcon​

Embedded Software Development - A Wild Ride! - Matt Speed & Andy Normington - ADC 2023

Embedded software development (aka firmware) can be challenging, but it's incredibly rewarding. Sitting at the beating heart of all audio hardware products, it handles the UI, connects the physical and virtual, and transforms signals and sounds to bring the product to life.

Join us to hear about how it connects with the electronics, software development and QA worlds, and the fun you can have bringing hardware products to life!!

Link to Slides: https://data.audio.dev/talks/2023/embedded-wild-ride/slides.pdf
_

Matt Speed

Matt has worked at Novation for 8 years, helping to develop a range of synths and controllers. He loves DSP, mathematics and music, and has an unhealthy(?) passion for the 56k.

Andy Normington

I work with embedded software on new Focusrite and Novation products. Coming from a background in music and audio and having started my career more focused on electronics, I enjoy learning about the way the systems we develop work in the real world - it's always a little different to what I expect! I'm always on the lookout for ways to do things a little better than last time, which at work drives my interest in teams and tooling (and at home drives my ever-increasing stash of DIY tools and gadgets). I'm at ADC for the first time in 2023, and really looking forward to learning lots and meeting people.
_

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC23 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #audiodev #embeddedsoftware #audio


Leveraging Pruning & Quantization for Real-Time Audio Applications – Dharanipathi Rathna Kumar

Join Us For ADC24 - Bristol - 11-13 November 2024
More Info: https://audio.dev/
@audiodevcon​

Leveraging Pruning and Quantization for Efficient Real-Time Audio Applications - Dharanipathi Rathna Kumar - ADCx India 2024

In the constantly changing world of audio processing, the demand for real-time response and high-quality output is a relentless challenge, especially in the presence of computational constraints. The rapid growth in model complexity, particularly in deep learning, has made it difficult to deploy architectures such as TCNs and LSTMs on resource-constrained devices, or to achieve fast inference speeds for tasks such as real-time audio effects, audio style transfer, and source separation. Model compression is vital to addressing these challenges, making it possible to retain high performance while using fewer computational resources. This talk delves into two key model compression techniques, pruning and quantization, and explores their applicability in the context of audio applications.

Pruning is a method of eliminating redundant or less contributive weights from a model to reduce the computational resources required to run the network. We'll explore its variants, methodologies, and outcomes, and how it can drastically reduce computational complexity without significantly undermining model performance. Quantization is the process of reducing the precision of the weights, biases, and activations so that they consume less memory. By reducing the bit-width of model parameters, we can achieve sizeable savings in memory and computational power, making quantization indispensable for on-device, real-time audio applications.
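
For illustration, here is a toy sketch of both techniques applied to a single weight tensor; this is deliberately simplified standalone code, not the API of any particular framework:

```cpp
// Toy sketch of the two compression techniques: magnitude-based pruning and
// symmetric per-tensor int8 quantization of a weight vector.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Pruning: zero out the weights whose magnitude falls below a threshold,
// removing the least contributive connections.
inline void magnitudePrune(std::vector<float>& weights, float threshold)
{
    for (float& w : weights)
        if (std::fabs(w) < threshold) w = 0.0f;
}

// Quantization: map float weights to int8 with a single per-tensor scale,
// so each parameter drops from 32 bits to 8.
struct QuantizedTensor {
    std::vector<int8_t> values;
    float scale;                       // dequantized value = values[i] * scale
};

inline QuantizedTensor quantizeInt8(const std::vector<float>& weights)
{
    float maxAbs = 0.0f;
    for (float w : weights) maxAbs = std::max(maxAbs, std::fabs(w));

    QuantizedTensor q;
    q.scale = (maxAbs > 0.0f) ? maxAbs / 127.0f : 1.0f;
    q.values.reserve(weights.size());
    for (float w : weights)
        q.values.push_back(static_cast<int8_t>(std::lround(w / q.scale)));
    return q;
}
```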

In this presentation, I elucidate that by adopting strategic weight pruning and parameter quantization, it is feasible to significantly enhance the efficiency of sophisticated audio models, paving the way for robust, real-time audio processing even in resource-constrained environments.

Link to Slides: https://data.audio.dev/talks/ADCxIndia/2024/pruning-and-quantization-for-efficient-real-time-audio-applications.pdf
_

Edited by Digital Medium Ltd - online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC24 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Prashant Mishra

#adc #audiodev #dsp #audio


Real-time Confessions in C++ – Fabian Renn-Giles – ADC23

https://audio.dev/ -- @audiodevcon​
_

Real-time Confessions in C++: The Most Common “Sins” in Real-Time Code - Fabian Renn-Giles - ADC 2023

This talk examines the most prevalent misconceptions and frequent errors encountered when audio developers handle real-time code in C++. With my background as a contractor in the audio industry, I'm often called in to help fix subtle bugs in, or review, code with real-time constraints. Yet I see (and have myself made) the same types of mistakes over and over again, resulting from a few common misconceptions about real-time C++ code. This talk offers an in-depth analysis of each of these misconceptions, debunking them with compelling examples from the audio industry.

Ranging from outright disregard of C++'s data-safety rules, to the overuse of std::atomic, to mistaken beliefs that locks and exceptions are categorically forbidden in real-time code, this presentation navigates the landscape between the theoretical rules of the C++ standard and real-world practical realities. It is an essential guide for developers seeking to avoid common pitfalls and write more efficient, reliable real-time code.
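
As one example of the kind of pattern such an analysis touches on (illustrative code, not taken from the talk): a mutex is not automatically a real-time "sin" if the audio thread only ever tries to acquire it and never blocks.

```cpp
// Sketch of a non-blocking lock pattern: the GUI/worker thread may block on
// the mutex, but the audio thread only try-locks and falls back to the data
// from the previous block, so it never waits.
#include <array>
#include <mutex>

class CoefficientExchange {
public:
    static constexpr std::size_t kNumCoeffs = 8;

    // GUI / worker thread: blocking here is fine.
    void update(const std::array<float, kNumCoeffs>& newCoeffs)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        shared_ = newCoeffs;
    }

    // Audio thread: never blocks. If the lock is contended, keep using the
    // coefficients from the previous block -- bounded work, no waiting.
    const std::array<float, kNumCoeffs>& getForThisBlock()
    {
        std::unique_lock<std::mutex> lock(mutex_, std::try_to_lock);
        if (lock.owns_lock())
            audioCopy_ = shared_;
        return audioCopy_;
    }

private:
    std::mutex mutex_;
    std::array<float, kNumCoeffs> shared_{};
    std::array<float, kNumCoeffs> audioCopy_{};
};
```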
_

Link to Slides: https://docs.google.com/presentation/d/1rJNebpRg3xJO4AiE5AntrkEKIXRta25DCOxHxDMzC0c
_

Fabian Renn-Giles

Fabian is a freelance C++ programmer, entrepreneur and consultant in the audio software industry. Before this, he was staff engineer at ROLI Ltd. and the lead maintainer/developer of the JUCE C++ framework (www.juce.com) - an audio framework used by thousands of commercial audio software companies. Before joining ROLI, he completed his PhD at Imperial College London, developing a numerical quantum optics solver with modern digital signal processing techniques and C++/MPI/OpenCL. Fabian is now a regular consultant specializing on low-level real-time C++, embedded audio, time sensitive networks, audio over WiFi and audio plug-in formats. His clients range from early startups to FAANG companies. Additionally, he is a regular speaker at the audio developer conference ADC and other C++ conferences.
_

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC23 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #audiodev #cppprogramming #audio


RADSan: A Realtime-Safety Sanitizer – David Trevelyan & Ali Barker – ADC23

https://audio.dev/ -- @audiodevcon​

RADSan: A Realtime-Safety Sanitizer - David Trevelyan & Ali Barker - ADC23

"ERROR: RealtimeSanitizer: call to malloc detected during execution of realtime function SketchyAlgorithm::process!"

We present RADSan, a realtime-safety sanitizer integrated into the LLVM project. Activated with a single argument to clang, RADSan allows developers to mark any function with a [[realtime]] attribute. At run time, realtime functions will report an error if RADSan detects activity that it knows is not realtime-safe.

Our talk will include:
• an introduction to sanitizers: what they do and how they work
• an exploration of the realtime-safety testing problem space: what is (and what isn't) possible, and how this influenced RADSan's design
• a deep dive into the components of RADSan and how they work together
• a demonstration of how to use RADSan to mark functions as realtime and test a system's realtime safety
• an open discussion on how best to share this idea with the wider community
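
Based on the description above, usage looks roughly like the sketch below; the exact attribute spelling and compiler flag are assumptions that may differ between RADSan versions.

```cpp
// Sketch of the usage pattern described above (attribute spelling and flag
// may vary by RADSan version). Build with something like:
//   clang++ -fsanitize=realtime sketchy.cpp
#include <vector>

struct SketchyAlgorithm {
    // Mark the audio callback as realtime; RADSan then errors at run time if
    // anything on this call path does something it knows is not realtime-safe.
    void process(float* buffer, int numSamples) [[clang::realtime]]
    {
        // Deliberate bug for demonstration: resizing allocates, so RADSan
        // reports "call to malloc detected during execution of realtime
        // function SketchyAlgorithm::process!" as in the error shown above.
        scratch.resize(numSamples);
        for (int i = 0; i < numSamples; ++i)
            buffer[i] *= 0.5f;
    }

    std::vector<float> scratch;
};
```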
_

David Trevelyan
_

Ali Barker
_

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC23 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #audiodev #cppprogramming #audio


Generative Audio: Indian Art, Computer Music – George Panicker – ADCx India 2024

Join Us For ADC24 - Bristol - 11-13 November 2024
More Info: https://audio.dev/
@audiodevcon​

Generative Audio: Indian art, Computer music - George Panicker - ADCx India 2024

Generative music and audiovisuals have long been at the crux of new media and music technology. The field has its legends and deities, such as John Cage, Moog and Brian Eno. Yet procedural music and the "audio-visual" as an art form have an ancient history in the Indian subcontinent and its artistic foundations.

In this talk, New Media artist and Creative Technologist George Panicker will take the audience on a tour of the surprising connections between generative audio and Indian music and art theory, providing some interesting ideas for the future of indigenous instrumentation and music technology.

Link to Slides: https://data.audio.dev/talks/ADCxIndia/2024/indian-roots-of-generative-music.pdf
_

Edited by Digital Medium Ltd - online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC24 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Prashant Mishra

#adc #dsp #audio #generativemusic


KEYNOTE: NIME to NISE: Rethinking the Design & Evaluation of Musical Interfaces – Anna Xambó Sedó

https://audio.dev/ -- @audiodevcon​

KEYNOTE: From NIME to NISE: Rethinking the Design and Evaluation of Musical Interfaces - Anna Xambó Sedó - ADC23

In this keynote talk, Anna will present her insights into designing and evaluating networked algorithmic spaces that support collaboration, participation, non-hierarchical structures and Do-it-yourself (DIY) practices for Sound and Music Computing (SMC) from a Human-Computer Interaction (HCI) perspective. She will dissect the term New Interfaces for Musical Expression (NIME) to propose instead the design and evaluation of 'NISEs', to accommodate the wider world of Sound-based music and art so that we can embrace other prototypes and systems that take into consideration other characteristics than 'new', 'interfaces/interactions', 'music' and 'expression'. We will look into a project that repurposes NIMEs as an educational tool; an AI-enhanced live-coding self-built environment that works as a customisable sampler of crowdsourced sounds; and a project related to creating natural NIMEs as interventions to the environment through sonic arts and sonification. The projects contribute to the vision of democratising SMC through the creation of technologies that can empower the community to solve real-world problems.
_

Websites:

* NIME website: https://www.nime.org
* NIME proceedings: https://www.nime.org/archives/
* NIME diversity: https://diversity.nime.org | https://www.nime.org/diversity/
* WoNoMute: https://wonomute.github.io
* NISE game: https://annaxambo.me/nise-game/
* Sam Topley: https://www.sam-topley.co.uk
* Dirty Electronics: https://dirtyelectronics.org
* Bed of Nails circuit: https://dirtyelectronics.org/docs/Bed_of_Nails.zip
* MIRLCa: https://mirlca.dmu.ac.uk
* Hydra: https://hydra.ojack.xyz
* Female Laptop Orchestra: https://femalelaptoporchestra.wordpress.com
* Sensing the Forest: https://sensingtheforest.github.io
_

Audios:

* og01 by peterMann (audio extract): https://carpal-tunnel.bandcamp.com/track/og01
_

Videos:

* Different Similar Sounds: A Live Coding Evening "From Scratch": https://youtu.be/lDVsawECK2Y
* Ceci n’est pas une usine @ +RAIN Rehearsal: https://vimeo.com/838940013
_

Publications:

* Jensenius, Alexander Refsum & Lyons, Michael J. (eds.) (2017). A NIME Reader: Fifteen Years of NIME. Springer. DOI: 10.1007/978-3-319-47214-0
https://stanford.io/4c9TiRi | https://bit.ly/3TuXELn

* Bowers, John & Archer, Phil (2005). Not Hyper, Not Meta, Not Cyber but Infra-Instruments. Proceedings of the International Conference on NIME. DOI: 10.5281/zenodo.1176713
https://www.nime.org/proc/bowers2005/

* Kori, Lisa & Novak, David (2020). “Handmade Sound Communities” in Collins, Nicolas (2020) Handmade Electronic Music, Routledge. DOI: 10.4324/9780429264818
https://www.handmadeelectronicmusic.com

* Landy, Leigh (2007). Understanding the Art of Sound Organisation. MIT Press. DOI: 10.7551/mitpress/7472.001.0001

* Hayes, Lauren & Marquez-Borbon, Adnan (2020). Nuanced and Interrelated Mediations and Exigencies (NIME): Addressing the Prevailing Political and Epistemological Crises. Proceedings of the International Conference on NIME. DOI: 10.5281/zenodo.4813459
https://www.nime.org/proc/nime20_83/

* Born, Georgina (2020). Diversifying MIR: Knowledge and Real-World Challenges, and New Interdisciplinary Futures. Transactions of the ISMIR, 3(1), pp. 193–204.
https://bit.ly/3VfFdM6

* Bardzell, Shaowen (2010). "Feminist HCI: Taking Stock and Outlining an Agenda for Design" In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1301-1310.
https://bit.ly/3wMkUMa

* Jawad, Karolina & Xambó, Anna (2020). How to Talk of Music Technology: An Interview Analysis Study of Live Interfaces for Music Performance among Expert Women. In: Proceedings of ICLI (ICLI 2020), pp. 41-47.
https://bit.ly/439DmdR

* Masu, Raul; Morreale, Fabio & Jensenius, Alexander Refsum (2023). The O in NIME: Reflecting on the Importance of Reusing and Repurposing Old Musical Instruments. Proceedings of the International Conference on NIME.
https://bit.ly/3TFKBXX

* Weetman, Catherine (2016). A Circular Economy Handbook for Business and Supply Chains: Repair, Remake, Redesign, Rethink. London: Kogan Page.

* Koren, Leonard (2008). Wabi-Sabi for Artists, Designers, Poets & Philosophers. Imperfect Publishing.

* Xambó, Anna; Saue, Sigurd; Jensenius, Alexander Refsum; Støckert, Robin & Brandtsegg, Oeyvind (2019). NIME Prototyping in Teams: A Participatory Approach to Teaching Physical Computing. Proceedings of the International Conference on NIME. DOI: 10.5281/zenodo.3672932
https://www.nime.org/proc/xambo2019/

* Xambó, Anna (2023). Discovering Creative Commons Sounds in Live Coding. Organised Sound, 28(2), 276-289. DOI:10.1017/S1355771823000262
https://bit.ly/48XkXCo
_

Anna Xambó Sedó
_

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

#adc #aiaudio #dsp #audio #hci
