Tag: AI

Real-Time Inference of Neural Networks: A Guide for DSP Engineers – Part II – Fares Schulz & Valentin Ackva – ADC 2024

https://audio.dev/ -- @audiodevcon​
---

Real-Time Inference of Neural Networks: A Guide for DSP Engineers - Part II - Fares Schulz & Valentin Ackva - ADC 2024
---

Continuing our exploration of implementing neural network inference for real-time audio applications, we have expanded from our initial plugin example to a comprehensive library that simplifies the deployment and integration of neural networks in audio applications.

In this talk, we discuss various aspects of our implementation. Since it is crucial to know whether inference engines exhibit real-time violations, we first quantify such violations within inference executions. Subsequently, we explore the integration of these engines in real-time audio environments, specifically addressing the challenges of running multiple instances simultaneously. To accomplish this, we use a static thread pool and, when available, host-provided threads. We also focus on strategies for achieving the lowest possible latency, presenting techniques we have implemented and opening a dialogue on a controversial approach to further reduce latency. Moreover, we share our findings on the performance impact of various factors on inference runtimes. To this end, we have extensively benchmarked different neural network architectures across different inference engines, and can show how differences in input buffer size and model size, as well as previously executed inferences, affect the overall performance.
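As a rough illustration of the multi-instance approach sketched above, here is a minimal, hypothetical C++ sketch of a process-wide static thread pool that audio callbacks can hand inference work to instead of running the network on the audio thread. The names (InferencePool, InferenceJob, trySubmit) are illustrative assumptions rather than the actual library's API, and a production version would replace the mutex-guarded queue with a lock-free FIFO.

```cpp
// Minimal sketch (illustrative only): a static thread pool shared by all plugin
// instances in a process, so neural-network inference never runs on the audio thread.
// A production version would use a lock-free FIFO instead of the mutex shown here.
#include <algorithm>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct InferenceJob {
    std::vector<float> input;                        // copy of the block to process
    std::function<void(std::vector<float>)> onDone;  // delivers the result back
};

class InferencePool {
public:
    static InferencePool& instance() {               // one pool per process
        static InferencePool pool(std::thread::hardware_concurrency());
        return pool;
    }

    // Called from the audio thread. try_to_lock keeps the call from blocking on a
    // contended mutex; if enqueueing fails, the caller falls back (e.g. bypasses).
    bool trySubmit(InferenceJob job) {
        std::unique_lock<std::mutex> lock(mutex_, std::try_to_lock);
        if (!lock.owns_lock())
            return false;
        jobs_.push(std::move(job));
        lock.unlock();
        cv_.notify_one();
        return true;
    }

    ~InferencePool() {
        { std::lock_guard<std::mutex> lock(mutex_); running_ = false; }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
    }

private:
    explicit InferencePool(unsigned numThreads) {
        for (unsigned i = 0; i < std::max(1u, numThreads); ++i)
            workers_.emplace_back([this] { workerLoop(); });
    }

    void workerLoop() {
        for (;;) {
            InferenceJob job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return !jobs_.empty() || !running_; });
                if (!running_ && jobs_.empty())
                    return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            // Placeholder for the actual engine call (ONNX Runtime, TFLite, ...).
            std::vector<float> output = job.input;   // identity "model" for the sketch
            job.onDone(std::move(output));
        }
    }

    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<InferenceJob> jobs_;
    std::vector<std::thread> workers_;
    bool running_ = true;
};
```

The talk additionally reuses host-provided threads when the plugin format exposes them; the sketch only covers the shared static-pool half of that design.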
---

Slides: https://data.audio.dev/talks/2024/real-time-inference-of-neural-networks/slides.pdf
---

Fares Schulz

I am a researcher in the Computer Music and Sound Synthesis Team, part of the Audio Communication Group at the Technische Universität Berlin. At present, my particular interest lies in the exploration of novel applications of neural networks for creative audio effects and synthesis, especially in the real-time and mixed-signal domains. Considering neural networks as a tool rather than a one-size-fits-all solution, I am researching how to make them available alongside long-established methods such as DSP algorithms and analog circuitry. I am also working on spatial audio (multi) systems, which require clustered audio servers due to their high computational cost.

Currently in the final stages of my Master's degree in Audio Communication and Technology, my educational background includes two Bachelor's degrees in Physics and Audio Engineering. Throughout this time, it is my passion for electronic music production that has taken me from theoretical mathematical equations and abstract artistic concepts to their development as algorithms and analog circuits. I am always looking for new ways to combine my interests in music, technology, and science, and love to chat with others who share these passions.
---

ADC is an annual event celebrating all audio development technologies, from music applications and game audio to audio processing and embedded systems. ADC’s mission is to help attendees acquire and develop new audio development skills, and build a network that will support their audio developer career.
Annual ADC Conference - https://audio.dev/
https://www.linkedin.com/company/audiodevcon

https://facebook.com/audiodevcon
https://instagram.com/audiodevcon
https://www.reddit.com/r/audiodevcon/
https://mastodon.social/@audiodevcon
---

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
---

Organized and produced by JUCE: https://juce.com/
---

Special thanks to the ADC24 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #audiodev #neuralnetworks #deeplearning #multithreading #dsp #audio #audioprocessing #audioproduction #audioprogramming #inference #soundtech #audiotech #audiotechnology


Building Audio Applications using AI Models: Practical Lessons Learned – ChangHun Sung – ADC 2024

https://audio.dev/ -- @audiodevcon​
---

Building Audio Applications using AI Models: Practical Lessons Learned - ChangHun Sung - ADC 2024
---

Supertone has been building and releasing a variety of AI audio applications since 2022. Ranging from a real-time noise reduction plug-in and a standalone voice changer application to an impulse response generator that employs a transformer architecture (the kind used in LLMs), Supertone’s products were made by taking AI models from the lab and delivering them into the hands of users. In this talk, we will share with you some practical lessons we’ve learned from our experiences.
---

Slides: https://data.audio.dev/talks/2024/practical-lessons-learned-from-building-audio-applications-using-ai-models/slides.pdf
---

ChangHun Sung

Chang Hun is a Software Engineer at Supertone, Inc. He previously worked in the game industry and used to develop game engines. Chang Hun now develops high performance C++ frameworks to accelerate the process of productizing ML models. He is also the principal clarinetist in an amateur orchestra in Seoul.
---

ADC is an annual event celebrating all audio development technologies, from music applications and game audio to audio processing and embedded systems. ADC’s mission is to help attendees acquire and develop new audio development skills, and build a network that will support their audio developer career.
Annual ADC Conference - https://audio.dev/
https://www.linkedin.com/company/audiodevcon

https://facebook.com/audiodevcon
https://instagram.com/audiodevcon
https://www.reddit.com/r/audiodevcon/
https://mastodon.social/@audiodevcon
---

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
---

Organized and produced by JUCE: https://juce.com/
---

Special thanks to the ADC24 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #ai #audiosoftware #audiodev #dsp #audio #conferenceaudio #audioprocessing #audioproduction #audioprogramming #musictech #soundtech #audiotech #audiotechnology


Practical Steps to Get Started with Audio Machine Learning – Martin Swanholm – ADC 2024

https://audio.dev/ -- @audiodevcon​
---

Knee-Deep Learning - Practical Steps to Get Started with Audio Machine Learning - Martin Swanholm - ADC 2024
---

Dive in and start creating!

Dive into the basics of machine learning for audio and start creating with a few practical steps.

This talk is aimed at developers without prior experience in machine learning who want to get inspired and equipped with the knowledge to start their own projects. The purpose is to provide a practical introduction to the topic in order to demystify theory and overcome implementation complexities.

Whether you're looking to solve complex problems where traditional DSP methods fall short or conjure up unthinkable sounds, this session is for you.

We dive right in, using simple and free tools to acquire data, set up code to create an ML training and inference pipeline, explore training techniques, and analyze and evaluate the results as we go. We cover what hardware is needed for training at different scales, ranging from cloud computing to consumer GPUs.

We'll cover basic theory, a brief history of different approaches, and, in particular, practical advice on getting started: data requirements, data acquisition, training, hardware needs, and deployment, including options for on-device real-time inference, embedded systems, and cloud-based SaaS.

Throughout, simple example model architectures suitable for beginners are used.

After training and analyzing some simple models, we explore different deployment options, including cloud-based inference, on-device native code using popular inference frameworks, and dedicated embedded hardware modules.
---

Slides: https://data.audio.dev/talks/2024/knee-deep-learning/slides.pdf
---

Martin Swanholm

Martin is a software developer and DSP engineer with over 30 years of experience, currently focusing on practical, real-world applications of machine learning in audio. His work emphasizes getting the most out of available hardware and compute resources, ensuring solutions are efficient and accessible to a wide range of users. He is currently developing effective tools for audio restoration, like phase-coherent frequency-domain models and multi-task learning models that improve speech off-line or interactively in real time.

Martin’s journey in digital audio began in the 1990s, and over the years, he’s worked on everything from basic signal processing to full multimedia systems. His approach is rooted in pragmatism—using techniques that work, whether simple or advanced, to solve real problems.

Martin excels at breaking down complex concepts into clear, actionable steps, making his presentations valuable for beginners looking to understand audio processing with machine learning. He’s committed to showing how practical, tried-and-true methods can yield strong results without requiring cutting-edge hardware or expertise, making his sessions approachable for all skill levels.
---

ADC is an annual event celebrating all audio development technologies, from music applications and game audio to audio processing and embedded systems. ADC’s mission is to help attendees acquire and develop new audio development skills, and build a network that will support their audio developer career.
Annual ADC Conference - https://audio.dev/
https://www.linkedin.com/company/audiodevcon

https://facebook.com/audiodevcon
https://instagram.com/audiodevcon
https://www.reddit.com/r/audiodevcon/
https://mastodon.social/@audiodevcon
---

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
---

Organized and produced by JUCE: https://juce.com/
---

Special thanks to the ADC24 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#machinelearning #ai #adc #audiodev #dsp #audio #conferenceaudio #audioprocessing #audioproduction #audioprogramming #musictech #soundtech #audiotech #audiotechnology


How AI Audio Apps Help Break the Communication Barrier for India’s Deaf Community – ADCx India 2025

https://audio.dev/ -- @audiodevcon​
---

How AI Audio Apps Help Break the Communication Barrier for India's Deaf Community - Gopikrishnan S, Bharat Shetty - ADCx India 2025
---

This talk explores how India can leverage GenAI and audio technologies such as live transcription, text-to-speech, and LLM-based auto-completion to help the deaf community communicate better. We are demonstrating project Sunva, a seamless communication tool for deaf Indians built using STT, TTS, and AI.

The speakers will cover:
1. The current state of the Sunva prototype and how it will change the way deaf people communicate in India.
2. How speech-to-text, text-to-speech, and GenAI are being used to build Sunva.
3. How technologies such as voice cloning, speech recognition for non-standard speech, and others will revolutionize accessibility in India.
---

Slides: https://data.audio.dev/talks/ADCxIndia/2025/breaking-the-communication-barrier-for-indias-deaf-community/slides.pptx
---

Gopikrishnan S

Gopi is an AI engineer who is passionate about building societal edge-AI use cases that solve challenges faced by people in India. He is a full-time AI consultant at people+ai, currently working on AI use cases in the accessibility domain.
---

Bharat Shetty

Experienced engineer with a demonstrated history of working across various domains in different roles. Skilled in Python, Java, NLP, cloud and data platforms and software, data structures and algorithms, and Linux. Strong engineering professional with a Master's degree from the University of Florida and a Bachelor of Engineering focused on Computer Science and Engineering from Sri Jayachamarajendra College of Engineering.
---

ADC is an annual event celebrating all audio development technologies, from music applications and game audio to audio processing and embedded systems. ADC’s mission is to help attendees acquire and develop new audio development skills, and build a network that will support their audio developer career.
Annual ADC Conference - https://audio.dev/
https://www.linkedin.com/company/audiodevcon

https://facebook.com/audiodevcon
https://instagram.com/audiodevcon
https://www.reddit.com/r/audiodevcon/
https://mastodon.social/@audiodevcon
---

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
---

Organized and produced by JUCE: https://juce.com/
---

Special thanks to the ADCx India Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#aiaudio #ai #aiapp #aiapplications #deafcommunity #adc #audiodev #dsp #audio #audioprocessing #audioprogramming #soundtech #audiotech #audiotechnology


scipy.cpp – Using AI to Port Python’s scipy.signal Filter-Related Functions to C++ for Real Time Use – Julius Smith – ADCx Gather 2024

https://audio.dev/ -- @audiodevcon​
---

scipy.cpp - Using AI to Port Python's scipy.signal Filter-Related Functions to C++ for Use in Real Time - Julius Smith - ADC 2024
---

This is a progress report on the following evolving chatbot workflow:

1. "Translate this Python function and its helpers to C++ .h and .cpp files, converting any doc strings to C++ Doxygen comments: ..." [Claude 3.5 Sonnet is hard to beat]
2. "Generate a progressive sequence of unit tests in Catch2 format"[ChatGPT-o1 can be amazing]

In general, Python translates smoothly to C++. The chatbots are especially strong in knowing API details, command-line options, and modern C++ idioms. The biggest pitfall seems to be complex algebraic manipulations.

When one-shot inference fails with ChatGPT-4o or Claude 3.5 Sonnet, o1 should be tried.
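To make the target of those two prompts concrete, here is a small, hypothetical example of the kind of output the workflow aims for: a C++ port of a scipy.signal-style function (a direct form II transposed lfilter with zero initial state) carrying a Doxygen comment, plus a Catch2 unit test. The function and the test values are illustrative assumptions, not code from the talk.

```cpp
// Hypothetical example of the target output: a scipy.signal.lfilter-style function
// ported to C++ (direct form II transposed), plus a Catch2 v3 unit test.
// Build by linking against Catch2::Catch2WithMain.
#include <algorithm>
#include <cstddef>
#include <vector>

#include <catch2/catch_approx.hpp>
#include <catch2/catch_test_macros.hpp>

/**
 * Filter data along one dimension with an IIR or FIR filter.
 * Mirrors scipy.signal.lfilter(b, a, x) for 1-D input and zero initial state.
 */
std::vector<double> lfilter(std::vector<double> b,
                            std::vector<double> a,
                            const std::vector<double>& x)
{
    const std::size_t n = std::max(b.size(), a.size());
    b.resize(n, 0.0);
    a.resize(n, 0.0);

    // Normalise so that a[0] == 1, as scipy does.
    const double a0 = a[0];
    for (std::size_t i = 0; i < n; ++i) { b[i] /= a0; a[i] /= a0; }

    std::vector<double> z(n > 1 ? n - 1 : 1, 0.0);  // filter state (zi = 0)
    std::vector<double> y(x.size(), 0.0);

    for (std::size_t m = 0; m < x.size(); ++m) {
        const double xn = x[m];
        const double yn = (n > 1) ? b[0] * xn + z[0] : b[0] * xn;
        for (std::size_t i = 1; i + 1 < n; ++i)
            z[i - 1] = b[i] * xn + z[i] - a[i] * yn;   // state update (DF2T)
        if (n > 1)
            z[n - 2] = b[n - 1] * xn - a[n - 1] * yn;
        y[m] = yn;
    }
    return y;
}

TEST_CASE("lfilter matches known impulse responses")
{
    const std::vector<double> impulse = {1.0, 0.0, 0.0, 0.0};

    // Two-tap moving average: h = [0.5, 0.5, 0, 0]
    auto fir = lfilter({0.5, 0.5}, {1.0}, impulse);
    REQUIRE(fir[0] == Catch::Approx(0.5));
    REQUIRE(fir[1] == Catch::Approx(0.5));
    REQUIRE(fir[2] == Catch::Approx(0.0));

    // One-pole IIR y[n] = x[n] + 0.5*y[n-1]: h = [1, 0.5, 0.25, 0.125]
    auto iir = lfilter({1.0}, {1.0, -0.5}, impulse);
    REQUIRE(iir[3] == Catch::Approx(0.125));
}
```

Checking such ports against hand-computable impulse responses is the kind of progressive unit test the second prompt asks the chatbot to generate.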
---

Julius Smith

Julius O. Smith is a research engineer, educator, and musician devoted primarily to developing new technologies for music and audio signal processing. He received the B.S.E.E. degree from Rice University in 1975 (Control, Circuits, and Communication), and the M.S. and Ph.D. degrees in E.E. from Stanford University, in 1978 and 1983, respectively. For his MS/EE, he focused largely on statistical signal processing. His Ph.D. research was devoted to improved methods for digital filter design and system identification applied to music and audio systems, particularly the violin. From 1975 to 1977 he worked in the Signal Processing Department at ESL, Sunnyvale, CA, on systems for digital communications. From 1982 to 1986 he was with the Adaptive Systems Department at Systems Control Technology, Palo Alto, CA, where he worked in the areas of adaptive filtering and spectral estimation. From 1986 to 1991 he was employed at NeXT Computer, Inc., responsible for sound, music, and signal processing software for the NeXT computer workstation. After NeXT, he became a Professor at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford, with a courtesy appointment in EE, teaching courses and pursuing/supervising research related to signal processing techniques applied to music and audio systems. At varying part-time levels, he was a founding consultant for Staccato Systems, Shazam Inc., and moForte Inc. He is presently a Professor Emeritus of Music and by courtesy Electrical Engineering at Stanford, and a perennial consultant for moForte Inc. and a few others. For more information, see https://ccrma.stanford.edu/~jos/.
---

ADC is an annual event celebrating all audio development technologies, from music applications and game audio to audio processing and embedded systems. ADC’s mission is to help attendees acquire and develop new audio development skills, and build a network that will support their audio developer career.
Annual ADC Conference - https://audio.dev/
https://www.linkedin.com/company/audiodevcon

https://facebook.com/audiodevcon
https://instagram.com/audiodevcon
https://www.reddit.com/r/audiodevcon/
https://mastodon.social/@audiodevcon
---

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
---

Organized and produced by JUCE: https://juce.com/
---

Special thanks to the ADCxGather Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#scipy #python #ai #adc #audiodev #dsp #audio #conferenceaudio #audioprocessing #audioprogramming #musictech #soundtech #audiotech #audiotechnology


Responsible AI for Offline Plugins – Tamper-Resistant Neural Audio Watermarking – Kanru Hua – ADC 2024

https://audio.dev/ -- @audiodevcon​
---

Responsible AI for Offline Plugins - Tamper-Resistant Neural Audio Watermarking - Kanru Hua - ADC 2024
---

In the era of rapid generative AI advancements, how do audio plugin developers harness the power of deep learning while safeguarding against misuse and ensuring regulatory compliance? Existing audio watermarking algorithms are designed to run in the cloud and, when deployed offline, are trivial for experienced hackers to bypass. In this talk, I introduce an innovative algorithm developed for Vocoflex, our AI-powered voice transformation plugin. Instead of first generating the voice and then adding the watermark in a post-processing step, our approach involves an end-to-end trained neural network that generates voices with inaudible watermarks already embedded. Furthermore, the neural network resists tampering by 'self-destructing' to contaminate outputs upon malicious perturbation of the network weights. We will share our methodology and provide guidelines for designing similar, but not identical, watermarking schemes that tightly couple program logic with watermarks and are resilient to both audio manipulations and reverse engineering attempts.
---

Slides: https://data.audio.dev/talks/2024/responsible-ai-for-offline-plugins/slides.pdf
---

Kanru Hua

In short and above everything else, I make vocal synthesizers! I am so into making the best sounding vocal synthesizer that it took me 13 years to self-teach all the maths and engineering and coding. I now run Tokyo-based audio software manufacturer Dreamtonics.
---

ADC is an annual event celebrating all audio development technologies, from music applications and game audio to audio processing and embedded systems. ADC’s mission is to help attendees acquire and develop new audio development skills, and build a network that will support their audio developer career.
Annual ADC Conference - https://audio.dev/
https://www.linkedin.com/company/audiodevcon

https://facebook.com/audiodevcon
https://instagram.com/audiodevcon
https://www.reddit.com/r/audiodevcon/
https://mastodon.social/@audiodevcon
---

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
---

Organized and produced by JUCE: https://juce.com/
---

Special thanks to the ADC24 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#ai #plugins #plugin #adc #audiodev #dsp #audio #conferenceaudio #audioprocessing #audioproduction #audioprogramming #soundtech #audiotech #audiotechnology


Roland’s Holistic Approach to AI for Music Creation – Paul McCabe, Ichiro Yazawa & Kazuyoshi Sasamori – ADC 2024

https://audio.dev/ -- @audiodevcon​
---

Roland’s Holistic Approach to AI for Music Creation - Paul McCabe, Ichiro Yazawa & Kazuyoshi Sasamori - ADC 2024
---

This presentation introduces Roland's approach to AI for music creation and will include a technical demonstration of a new AI-powered application. Roland will also overview its holistic strategy for AI which considers R&D, policy, and governance mechanisms. As a general-purpose disruptive technology, Artificial Intelligence is in the process of impacting virtually every aspect of life, including creativity. As a leading music technology innovator, Roland is highly optimistic about the potential of AI to enhance and empower human music making but is deeply aware of the need to innovate responsibly.
---

Slides: https://data.audio.dev/talks/2024/rolands-holistic-approach-to-ai-for-music-creation/slides.pdf
---

Paul McCabe

Paul McCabe is the SVP of Research and Innovation and leader of Roland Future Design Lab, Roland’s horizon-scanning R&D group. In more than 30 years with Roland, Paul has held positions in product management, marketing, and served as the President & CEO of Roland Canada before relocating from Vancouver to Los Angeles with his family in 2017. As the leader of Roland Future Design Lab (RFDL), Paul oversees a growing team of engineers and technologists exploring the frontiers of emerging tech, trends, and shifts in creative culture around the world. Paul is a family man, composer, sound designer, and lover of science fiction and ice hockey.
---

Ichiro Yazawa

Ichiro Yazawa has worked in product development for music production and in connectivity technology development such as USB, IEEE 1394, Ethernet Audio, Bluetooth, Wi-Fi, and MIDI 2.0. Through supporting the development of audio, MIDI, the USB layer, and standard drivers for major operating systems, I have made many friends all over the world who are full of irreplaceable ideas and skills.

In recent years, I have been responsible for a wide range of research and development including AI, Web3, IoT, and Cyber Security. I have been meeting friends with more diverse talents and following paths to new possibilities.

Personally, I enjoy listening to ambient music and playing keyboard on weekends.
---

Kazuyoshi Sasamori

I studied signal processing, information processing, and computer architecture at university, and have been working as an engineer at Roland for many years since then. I have been involved in the development of pianos, drums, and synthesizers, focusing on hardware design and sound system design. I have also spent time on the BOSS team, working on guitar products.

Some of my memorable projects include the speaker system for the V-Piano Grand and the noise reduction for V-Drums. In recent years, I have served as a Product Manager, planning and developing synthesizers such as the JUNO-DS, RD-2000, and FANTOM/FANTOM-0. Following that, I took on the role of technology development in the R&D department, and I am currently leading research and development as an AI Product Manager at the Roland Future Design Lab, with a focus on the utilization and implementation of AI. The future looks incredibly exciting.

Personally, I am an avid fan of synthesizers and rhythm machines, and I love being surrounded by a plethora of hardware—sometimes I feel like my body is made of electronic sounds. However, I also appreciate jazz and acoustic music. In my free time, I enjoy outdoor activities and spending time with my dog.
---

ADC is an annual event celebrating all audio development technologies, from music applications and game audio to audio processing and embedded systems. ADC’s mission is to help attendees acquire and develop new audio development skills, and build a network that will support their audio developer career.
Annual ADC Conference - https://audio.dev/
https://www.linkedin.com/company/audiodevcon

https://facebook.com/audiodevcon
https://instagram.com/audiodevcon
https://www.reddit.com/r/audiodevcon/
https://mastodon.social/@audiodevcon
---

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
---

Organized and produced by JUCE: https://juce.com/
---

Special thanks to the ADC24 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#roland #ai #adc #audiodev #audio #conferenceaudio #audioprocessing #audioproduction #audioprogramming #music #musictech #soundtech #audiotech #audiotechnology


Teaching Audio Developers How to Build AI-Enhanced Audio Plugins – Matthew Yee-King – ADCx Gather 2024

https://audio.dev/ -- @audiodevcon​
---

Teaching Audio Developers How to Build AI-Enhanced Audio Plugins - Matthew Yee-King - ADC 2024
---

In this talk I will reflect on my experience as an educator, developer and musician working with AI and music technology. I have used many technologies, from Java applets to SuperCollider, the Web Audio API, and on to C++ and JUCE. I will then describe an educationally focused workflow for the development of AI-enhanced audio plugins that I am currently using for teaching and research/development work. The workflow utilises C++, JUCE, CMake, PyTorch and RTNeural, and has been designed to support educators, students and developers who wish to learn about the integration of AI and machine learning technology into real-time audio applications. I will also present various example plugins I have built using the workflow: a MIDI improviser, a neural network synthesizer controller and a neural effects unit.
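As a rough illustration of the kind of plugin the workflow targets, here is a minimal, hypothetical sketch of a per-sample "neural effects unit" processing loop. The NeuralModel interface is a stand-in assumption for illustration only, not RTNeural's or JUCE's actual API; in the workflow above it would be backed by a network trained in PyTorch and run through an engine such as RTNeural.

```cpp
// Minimal, hypothetical sketch of a per-sample neural effects unit loop.
// NeuralModel is an illustrative stand-in interface, not a real library API.
#include <cstddef>

struct NeuralModel {
    // Maps one input sample to one output sample, keeping internal state
    // (e.g. recurrent units) between calls.
    virtual float forward(float inputSample) = 0;
    virtual ~NeuralModel() = default;
};

// Called once per audio block from the plugin's process callback.
void processChannel(NeuralModel& model, float* channel, std::size_t numSamples)
{
    for (std::size_t n = 0; n < numSamples; ++n)
        channel[n] = model.forward(channel[n]);   // sample-by-sample inference
}
```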
---

Matthew Yee-King

Matthew Yee-King is a professor in the Department of Computing at Goldsmiths, University of London, and served as a visiting professor at the Politecnico di Milano in 2024. He is the programme director for the UoL Worldwide Computer Science Programme, launched in 2019 as the first undergraduate programme on the Coursera platform, which has since attracted nearly 10,000 students from 125 countries. His research focuses on education technology, particularly AI-enhanced systems, and the application of AI to digital signal processing and music performance. In 2024, his book "Build AI-enhanced Audio Plugins with C++" was published by Routledge. Additionally, Matthew is an accomplished musician who has performed, recorded, and collaborated with internationally renowned jazz and electronic musicians such as Aphex Twin, Tom Skinner (Smile band), Matthew Herbert, and Squarepusher.
---

ADC is an annual event celebrating all audio development technologies, from music applications and game audio to audio processing and embedded systems. ADC’s mission is to help attendees acquire and develop new audio development skills, and build a network that will support their audio developer career.
Annual ADC Conference - https://audio.dev/
https://www.linkedin.com/company/audiodevcon
https://twitter.com/audiodevcon
https://facebook.com/audiodevcon
https://instagram.com/audiodevcon
https://www.reddit.com/r/audiodevcon/
https://mastodon.social/@audiodevcon
---

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
---

Organized and produced by JUCE: https://juce.com/
---

Special thanks to the ADCxGather Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#ai #audioplugins #adc #audiodev #dsp #audio #conferenceaudio #audioprocessing #audioproduction #audioprogramming #musictech #soundtech #audiotech #audiotechnology


Interfaces are King! – A Practical Look at AI Audio Tools and What Audio Professionals Actually Need – Andrew Scheps

https://audio.dev/ -- @audiodevcon​
---

Keynote: Interfaces are King! - A Practical Look at AI Audio Tools and What Audio Professionals Actually Need - Andrew Scheps - ADC 2024
---

Technology in audio software has progressed immensely in the last few years, especially in the use of LLM AI in audio processing and stem separation. This talk will focus on the current uses of AI in audio tools; a practical look at what professionals actually need; and, possibly most importantly, how the technology should be presented to non-technical users in the industry. There will also be discussion of the necessity for quality data sets as well as some of the legal ramifications of using them.
---

Andrew Scheps

Andrew Scheps has engineered and mixed some of the biggest bands in the world: Green Day, Red Hot Chili Peppers, Weezer, Audioslave, Black Sabbath, Metallica, Linkin Park, Hozier, Kaleo and U2. He’s worked with legends such as Johnny Cash, Neil Diamond and Iggy Pop, as well as indie artists such as Low Roar, My Brightest Diamond, Motorpsycho, Rancid, Manu Chao, Tinariwen, Farao, CRX, Cass McCombs and Blood Red Shoes.

Andrew has been awarded Grammys for his work with Adele, Red Hot Chili Peppers and Ziggy Marley, and was nominated for Best Engineered Album, Classical for his work on The Blue Hour, as well as garnering over 20 nominations in categories such as Best Rock Song, Best Rock Album and Best Hard Rock Performance. In addition, Andrew was named International Engineer of the Year by the UK’s Music Producers Guild.

In 2011 Andrew started Tonequake Records, a boutique indie label with acts including Low Roar, Favez, Fellow Robot, Act Rights, and Jeff Babko.

After 25 years in Los Angeles, Andrew relocated to the UK in July 2015 where he set up his own mix room which is now equipped for Dolby Atmos Music. He has collaborated on several plugins with Waves and develops custom software for controlling Pro Tools to make his life easier.
---

ADC is an annual event celebrating all audio development technologies, from music applications and game audio to audio processing and embedded systems. ADC’s mission is to help attendees acquire and develop new audio development skills, and build a network that will support their audio developer career.
Annual ADC Conference - https://audio.dev/
https://www.linkedin.com/company/audiodevcon

https://facebook.com/audiodevcon
https://instagram.com/audiodevcon
https://www.reddit.com/r/audiodevcon/
https://mastodon.social/@audiodevcon
---

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
---

Organized and produced by JUCE: https://juce.com/
---

Special thanks to the ADCxGather Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #ai #aiaudio #audiodev #dsp #audio #audioprocessing #audioproduction #audioprogramming #music #musictech #soundtech #audiotech #audiotechnology


Workshop: Practical Machine Learning – Embed a Generative AI Model in Your App – by @dynamic_cast – ADC 2024

https://audio.dev/ -- @audiodevcon​
---

Workshop: Practical Machine Learning - Embed a generative AI model in your app and train your own interactions with it - Anna Wszeborowska, Harriet Drury, Sohyun Im, Julia Läger & Pauline Nemchak - ADC 2024
---

In this workshop we’ll explore the fundamentals of Machine Learning. We will run through an easy-to-follow machine learning model that will:

Be easy for beginners
Run on the CPU
Be real time

This will cover an intro to Machine Learning, small vs. large models, and an introduction to a training environment in Python. We aim to make this workshop as interactive as possible, with the idea of having a trained model in session for everyone to use and play with.

This will be a self-contained workshop that aims to be accessible to all levels of learning - all elements used in the practical part of the workshop will be thoroughly explained in the introduction.

Dynamic Cast - Who Are We?
Dynamic Cast is a peer-to-peer C++ study group and a safe space for underrepresented groups (women, LGBTQIA+, minority ethnic). The Dynamic Cast workshop at ADC is designed to create an entry point to the industry for newcomers; everyone is welcome.
---

Slides: https://data.audio.dev/talks/2024/practical-machine-learning/slides.pdf
---

Anna Wszeborowska

Anna is an independent software developer and consultant working on interactive real-time systems for music and audio. She is also an academic researcher focusing on exploring strategies for aiding musical self-expression with machine learning. During her time at Ableton she held the role of Technical Principal for Max for Live, worked on the hardware instrument Push, and contributed to some of the flagship instruments available in the DAW Live and the iOS app Note. Anna has founded programming initiatives helping people underrepresented in tech advance in the field, and currently co-organises a peer-to-peer C++ study group called Dynamic Cast.
---

Harriet Drury

Harriet is a Software Engineer at Native Instruments, working on iZotope-branded products. She has a keen interest in DSP and ML, having written a proof-of-concept inference engine in Cmajor. Her most recent work in ML has been on real-time applications of large libraries.

Plays guitar (occasionally), can hit drums sometimes on time. Harriet co-organises Dynamic Cast, a C++ learning group for underrepresented groups. There are chapters in Berlin and London, with the option to join online, too.
---

Sohyun Im

Sohyun Im graduated with a Bachelor's degree in Sound Engineering from the University of West London and is currently pursuing her Master's in Sound and Music Computing at Queen Mary University of London.

She has a keen interest in audio programming and DSP, having conducted research on Virtual Analog Modelling, which bridges the analog and digital realms, during her undergraduate studies. Additionally, she is deeply interested in the emerging field of generative music AI and is dedicated to advancing her knowledge in this area.

Sohyun is also a lifelong pianist. Regardless of the genre, feel free to invite her whenever you need a pianist.
---

Julia Läger

Julia is a Software Developer with 7+ years of experience writing C++ production code, previously in automotive and now in music tech at Focusrite. But she also really likes Python. She's currently working on internal tooling, which involves a potpourri of domains and technologies, going from high-level desktop applications down to embedded libraries. She's passionate about music and science, and actually has a background in experimental nanophysics.
---

Pauline Nemchak

A front-end engineer, music and audio industries enthusiast and linguaphile (aren't we all).
---

ADC is an annual event celebrating all audio development technologies, from music applications and game audio to audio processing and embedded systems. ADC’s mission is to help attendees acquire and develop new audio development skills, and build a network that will support their audio developer career.
Annual ADC Conference - https://audio.dev/
https://www.linkedin.com/company/audiodevcon

https://facebook.com/audiodevcon
https://instagram.com/audiodevcon
https://www.reddit.com/r/audiodevcon/
https://mastodon.social/@audiodevcon
---

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
---

Organized and produced by JUCE: https://juce.com/
---

Special thanks to the ADC24 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#machinelearning #machinelearningapplications #ai #adc #audiodev #audio #machinelearningwithpython #audioprocessing #audioproduction #audioprogramming #musictech #soundtech #audiotech #audiotechnology
