[Teaching people how to] Build AI-enhanced audio plugins
In this talk I will reflect on my experience as an educator, developer and musician working with AI and music technology. I have used many technologies over the years, from Java applets and SuperCollider to the Web Audio API and on to C++ and JUCE. I will then describe an educationally focused workflow for developing AI-enhanced audio plugins that I currently use for teaching and for research and development work. The workflow utilises C++, JUCE, CMake, PyTorch and RTNeural, and has been designed to support educators, students and developers who wish to learn how to integrate AI and machine learning technology into real-time audio applications. I will also present various example plugins I have built using the workflow: a MIDI improviser, a neural network synthesizer controller and a neural effects unit.
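To give a flavour of the kind of integration the workflow involves, here is a minimal sketch of the PyTorch-to-real-time step using RTNeural. It assumes a single-input, single-output network has been exported from PyTorch to RTNeural's JSON format as "model.json" (the file name and network shape are assumptions for illustration, not the talk's actual code); the per-sample loop mirrors what a JUCE processBlock would do on each channel.

```cpp
#include <fstream>
#include <iostream>
#include <memory>
#include <vector>

#include <RTNeural/RTNeural.h>

// Run the network over a block of samples, one sample at a time,
// as a JUCE processBlock() would do on a channel's write pointer.
void processBlock(RTNeural::Model<float>& model, float* samples, int numSamples)
{
    for (int n = 0; n < numSamples; ++n)
    {
        const float input[1] = { samples[n] };
        samples[n] = model.forward(input); // one sample in, one sample out
    }
}

int main()
{
    // "model.json" is a hypothetical file of weights exported from PyTorch
    std::ifstream jsonStream { "model.json", std::ifstream::binary };
    auto model = RTNeural::json_parser::parseJson<float>(jsonStream);
    if (model == nullptr)
        return 1;

    model->reset();

    std::vector<float> block(512, 0.0f); // a silent test block
    processBlock(*model, block.data(), static_cast<int>(block.size()));
    std::cout << "first output sample: " << block[0] << "\n";
    return 0;
}
```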
Matthew Yee-King
Professor
Goldsmiths
Matthew Yee-King is a professor in the Department of Computing at Goldsmiths, University of London, and was a visiting professor at the Politecnico di Milano in 2024. He is the programme director for the University of London Worldwide Computer Science programme, which launched in 2019 as the first undergraduate programme on the Coursera platform and has since attracted nearly 10,000 students from 125 countries. His research focuses on education technology, particularly AI-enhanced systems, and the application of AI to digital signal processing and music performance. In 2024, his book "Build AI-enhanced Audio Plugins with C++" was published by Routledge. Matthew is also an accomplished musician who has performed, recorded, and collaborated with internationally renowned jazz and electronic musicians such as Aphex Twin, Tom Skinner (The Smile), Matthew Herbert, and Squarepusher.