Tag: AI

Interfaces are King! – A Practical Look at AI Audio Tools and What Audio Professionals Actually Need – Andrew Scheps

https://audio.dev/ -- @audiodevcon​
---

Keynote: Interfaces are King! - A Practical Look at AI Audio Tools and What Audio Professionals Actually Need - Andrew Scheps - ADC 2024
---

Technology in audio software has progressed immensely in the last few years, especially in the use of AI, including large language models, in audio processing and stem separation. This talk will focus on the current uses of AI in audio tools; a practical look at what professionals actually need; and, possibly most importantly, how the technology should be presented to non-technical users in the industry. There will also be discussion of the need for quality data sets, as well as some of the legal ramifications of using them.
---

Andrew Scheps

Andrew Scheps has engineered and mixed some of the biggest bands in the world: Green Day, Red Hot Chili Peppers, Weezer, Audioslave, Black Sabbath, Metallica, Linkin Park, Hozier, Kaleo and U2. He’s worked with legends such as Johnny Cash, Neil Diamond and Iggy Pop, as well as indie artists such as Low Roar, My Brightest Diamond, Motorpsycho, Rancid, Manu Chao, Tinariwen, Farao, CRX, Cass McCombs and Blood Red Shoes.

Andrew has been awarded Grammys for his work with Adele, Red Hot Chili Peppers and Ziggy Marley, nominated for Best Engineered Album, Classical for his work on The Blue Hour as well as garnering over 20 nominations in categories such as Best Rock Song, Best Rock Album and Best Hard Rock Performance. In addition, Andrew was named International Engineer of the Year Award by the UK’s Music Producers Guild.

In 2011 Andrew started Tonequake Records, a boutique indie label with acts including Low Roar, Favez, Fellow Robot, Act Rights, and Jeff Babko.

After 25 years in Los Angeles, Andrew relocated to the UK in July 2015 where he set up his own mix room which is now equipped for Dolby Atmos Music. He has collaborated on several plugins with Waves and develops custom software for controlling Pro Tools to make his life easier.
---

ADC is an annual event celebrating all audio development technologies, from music applications and game audio to audio processing and embedded systems. ADC’s mission is to help attendees acquire and develop new audio development skills, and build a network that will support their audio developer career.
Annual ADC Conference - https://audio.dev/
https://www.linkedin.com/company/audiodevcon

https://facebook.com/audiodevcon
https://instagram.com/audiodevcon
https://www.reddit.com/r/audiodevcon/
https://mastodon.social/@audiodevcon
---

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
---

Organized and produced by JUCE: https://juce.com/
---

Special thanks to the ADCxGather Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #ai #aiaudio #audiodev #dsp #audio #audioprocessing #audioproduction #audioprogramming #music #musictech #soundtech #audiotech #audiotechnology


Workshop: Practical Machine Learning – Embed a Generative AI Model in Your App – by @dynamic_cast – ADC 2024

---

Workshop: Practical Machine Learning - Embed a generative AI model in your app and train your own interactions with it - Anna Wszeborowska, Harriet Drury, Sohyun Im, Julia Läger & Pauline Nemchak - ADC 2024
---

In this workshop we’ll explore the fundamentals of Machine Learning. We will run through an easy-to-follow machine learning model that will:

Be easy for beginners
Run on the CPU
Be real time

This will cover an introduction to machine learning, small vs. large models, and a training environment in Python. We aim to make this workshop as interactive as possible, with the idea of having a trained model in the session for everyone to use and play with.

This will be a self-contained workshop aiming to be accessible to all levels of learning - all elements used in the practical part of the workshop will be thoroughly explained in the introduction.
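For readers who want a concrete picture of what "beginner-friendly, CPU-only, real-time-capable" can mean, here is a minimal sketch (not the workshop's actual material) of a tiny fixed-weight neural network applied sample-by-sample to one audio buffer, in plain Python:

```python
import math

def relu(x):
    return x if x > 0.0 else 0.0

class TinyMLP:
    """A fixed-size perceptron: 1 input -> 4 hidden units -> 1 output.
    The weights here are arbitrary constants; a real model would be trained."""
    def __init__(self):
        self.w1 = [0.5, -0.3, 0.8, 0.1]   # input -> hidden weights
        self.b1 = [0.0, 0.1, -0.1, 0.0]   # hidden biases
        self.w2 = [0.25, -0.5, 0.4, 0.9]  # hidden -> output weights
        self.b2 = 0.0                     # output bias

    def forward(self, x):
        hidden = [relu(w * x + b) for w, b in zip(self.w1, self.b1)]
        return sum(w * h for w, h in zip(self.w2, hidden)) + self.b2

def process_block(model, samples):
    """Run the model sample-by-sample over one audio buffer."""
    return [model.forward(s) for s in samples]

# One 32-frame buffer of a 440 Hz sine at 48 kHz
buffer = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(32)]
out = process_block(TinyMLP(), buffer)
```

A model this small is just a handful of multiply-adds per sample, which is what makes CPU-only real-time operation plausible; training simply replaces the arbitrary constants with learned weights.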
Dynamic Cast - Who Are We?
Dynamic Cast is a peer-to-peer C++ study group and a safe space for underrepresented groups (women, LGBTQIA+ people, minority ethnic groups). The Dynamic Cast workshop at ADC is designed to create an entry point to the industry for newcomers; everyone is welcome.
---

Slides: https://data.audio.dev/talks/2024/practical-machine-learning/slides.pdf
---

Anna Wszeborowska

Anna is an independent software developer and consultant working on interactive real-time systems for music and audio. She is also an academic researcher exploring strategies for aiding musical self-expression with machine learning. During her time at Ableton she held the role of Technical Principal for Max for Live, worked on the hardware instrument Push and contributed to some of the flagship instruments available in the DAW Live and the iOS app Note. Anna has founded programming initiatives helping people underrepresented in tech advance in the field. She currently co-organises a peer-to-peer C++ study group called Dynamic Cast.
---

Harriet Drury

Harriet is a Software Engineer at Native Instruments, working on iZotope-branded products. She has a keen interest in DSP and ML, having written a proof-of-concept inference engine in Cmajor. Her most recent ML work has been on real-time applications of large libraries.

She plays guitar (occasionally) and can hit drums, sometimes on time. Harriet co-organises Dynamic Cast, a C++ learning group for underrepresented groups, with chapters in Berlin and London and the option to join online, too.
---

Sohyun Im

Sohyun Im graduated with a Bachelor's degree in Sound Engineering from the University of West London and is currently pursuing her Master's in Sound and Music Computing at Queen Mary University of London.

She has a keen interest in audio programming and DSP, having conducted research on Virtual Analog Modelling, which bridges the analog and digital realms, during her undergraduate studies. Additionally, she is deeply interested in the emerging field of generative music AI and is dedicated to advancing her knowledge in this area.

Sohyun is also a lifelong pianist. Regardless of the genre, feel free to invite her whenever you need a pianist.
---

Julia Läger

Julia is a Software Developer with 7+ years' experience writing C++ production code, previously in automotive and now in music tech at Focusrite. But she also really likes Python. She's currently working on internal tooling, which involves a potpourri of domains and technologies, ranging from high-level desktop applications down to embedded libraries. She's passionate about music and science, and actually has a background in experimental nanophysics.
---

Pauline Nemchak

A front-end engineer, music and audio industries enthusiast and linguaphile (aren't we all).
---


Special thanks to the ADC24 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#machinelearning #machinelearningapplications #ai #adc #audiodev #audio #machinelearningwithpython #audioprocessing #audioproduction #audioprogramming #musictech #soundtech #audiotech #audiotechnology


GPU Based Audio Processing Platform with AI Audio Effects – Simon Schneider – ADCxGather 2024

---

GPU Based Audio Processing Platform with AI Audio Effects - Are GPUs ready for real-time processing in live sound engineering? - Simon Schneider - ADCxGather 2024
---

Parallelising real-time audio effects requires complex task management and synchronisation. GPUs are optimised for parallel processing while still retaining flexible scheduling comparable to a CPU's. As a result, they combine the strengths of both DSPs and CPUs in a single device. In addition, recent trends advocate using AI audio processing algorithms, which work best on GPU architectures.

This thesis presents an implementation of an embedded GPU-based audio processing framework on an Nvidia Jetson hardware platform. It can combine neural network inference and other audio effects into signal graphs that process within periods as small as 32 frames (0.667ms).

The signal graph does not restrict the number and combination of parallel and serial audio effects as long as the real-time limit is met. Therefore, the framework has been tested on large numbers of parallel channels, as found in a mixing console, and complex routing options available in high-end audio effect processors, such as the Neural DSP Quad Cortex.

Launching GPU work using the CUDA graph API produces better stability and performance than was observed using the CUDA stream API in a 2017 study. Processing a signal graph that fully utilises the Jetson's resources by mimicking a 64-channel mixing console on a 128-frame (2.67ms) period has a higher than 99% success rate. However, occasional stalling on the GPU can produce worst-case execution times of up to 20ms, regardless of the loaded audio effects. As a result, the framework cannot yet be classified as real-time capable.
Further study of the CUDA scheduler, along with improvements to the operating system and audio driver, may achieve real-time capability in the future.
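The quoted buffer periods follow directly from frame count and sample rate; assuming the 48 kHz rate implied by the figures above, the real-time deadline per buffer is:

```python
def buffer_period_ms(frames, sample_rate_hz=48000):
    """Time available to process one audio buffer, in milliseconds."""
    return frames / sample_rate_hz * 1000.0

# The two buffer sizes discussed in the talk:
small = buffer_period_ms(32)    # ~0.667 ms
large = buffer_period_ms(128)   # ~2.67 ms
```

This is why a 20 ms worst-case stall is disqualifying: it overruns even the 128-frame deadline by nearly an order of magnitude.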
---

Simon Schneider

Simon is a musician based in Winterthur. After completing his Bachelor's degree in IT, he realized that he could combine his passions for music and software engineering in audio development. As a result, he went on to complete a Master's degree in Electrical Engineering to learn about signal processing and hardware programming. During this time he worked part-time at the ZHAW Institute of Embedded Systems, where he developed an audio effects processing platform on an Nvidia Jetson GPU as part of his Master's thesis.

His bandmates will say that playing the guitar is just an excuse to justify all the money he spends on music-related gear. While this is not entirely untrue, Simon is the type of programmer who spends days automating or streamlining a repetitive process that takes minutes to complete manually. As a result, automating guitar effect changes or building a wireless in-ear monitoring rack for shorter live-show setup and teardown times simply requires a lot of gear.

Furthermore, Simon has a flair for modular and customizable software architecture. Since audio gear often lacks both of these qualities, he endeavours to make them a primary focus in all his audio development projects.
---


Special thanks to the ADCxGather Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#audioprocessing #gpu #ai #adc #audiodev #dsp #audio #conferenceaudio #audioproduction #audioprogramming #musictech #soundtech #audiotech #audiotechnology


KEYNOTE: Foundation Models Don’t Understand Me – Lessons From AI Lutherie for Live Performances – Manaswi Mishra

---

Keynote: Foundation Models Don’t Understand Me - Lessons From AI Lutherie for Live Performances - Manaswi Mishra - ADC 2024
---

Living in a world of rapidly accelerating synthetic media, the outputs of generative AI often leave us feeling frustrated, amused and even manipulated. Early examples of creative AI tools struggle to go beyond imitating styles and patterns, producing a context-less blend of borrowed aesthetics from the datasets they’re trained on. This race to a statistically averaged, flattened aesthetic misunderstands the core goals of creative expression. In contrast, audio developers and instrument builders understand the importance of providing a toolkit of exploration, intentional serendipity and discovery to a new age of artists performing with AI. Drawing on my work building AI musical instruments for live opera, symphony and installation performances, I share a framework of frustrations and guidelines for AI instrument building.
---

Slides: https://data.audio.dev/talks/2024/foundation-models-dont-understand-me/slides.pdf
---

Manaswi Mishra

Manaswi Mishra is a LEGO Papert Fellow and a current PhD research assistant in the Opera of the Future group at the MIT Media Lab. His research explores strategies and frameworks for a new age of composing, performing and learning music using A.I. centered on bespoke human intent. He joined the MIT Media Lab in 2019 and completed his MS in Media Arts and Sciences, developing his work "Living, Singing A.I." to empower bespoke A.I. music making with just the human voice. His AI instruments have been featured in opera and symphony performances such as VALIS (2023), Brain Opera (2022) and Flow Symphony (2024). His work has been published and exhibited by MIT Press, Harvard Tech Review, the Washington Post, the Boston Globe, conferences on computational creativity, ISEA Brisbane, IFA Stuttgart, CVPR 2024, Burning Man 2023, the Copyright Society 2023, Bloomberg Law, the Seoul Arts Center and more.

Prior to joining MIT, he completed a Master's in Music Technology at UPF, Barcelona, and a Bachelor's in Technology at the Indian Institute of Technology Madras. He is passionate about a creative future where every individual can express, reflect, create and connect through music. Manaswi is also a founding instigator of the Music Tech Community in India and has organized workshops, hackathons and community events to foster the future of music and technology in his home country. In his free time, Manaswi loves collecting weird sounds.
---


Special thanks to the ADC24 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#ai #artificialintelligence #artificialintelligencetechnology #adc #audiodev #audio #conferenceaudio #audioprocessing #audioproduction #audioprogramming #musictech #soundtech #audiotech #audiotechnology


Pro Tools Scripting SDK and AI: Driving Workflows & In-App Help – Paul Vercellotti & Sam Butler ADC23

Join Us For ADC24 - Bristol - 11-13 November 2024
More Info: https://audio.dev/
@audiodevcon​

Pro Tools Scripting SDK and AI: Driving Workflows and In-App Help - Paul Vercellotti & Sam Butler - ADC 2023

Last year at ADC, Avid announced a new and free Pro Tools scripting SDK which allows third-party developers to create solutions that tightly integrate with Pro Tools in ways that have not been possible before. Continuing the conversation that started last year, Avid will present at ADC '23 a technical preview that shows how the power of large language models can be combined with the Pro Tools scripting SDK to automate workflows and assist users. In addition, Avid will update the development community on the status of the SDK program.

Link to Slides:
_

Paul Vercellotti

Paul Vercellotti is a software architect at Avid Audio and the technical / architectural lead for Pro Tools. He focuses on architectural design direction for current and future Avid Audio products and technical leadership for the Avid Audio engineering team. He has been creating audio software for over 25 years and is passionate about solving the fun and challenging problems of audio and music.
_

Sam Butler

Sam has worked at Avid for over 20 years, starting off in technical support for Sibelius, running public demos and putting sound libraries together for Sibelius Sounds, then moving to product management in 2013. In the past decade, Sam has managed projects to bring Avid solutions to the cloud and mobile, helped spearhead the modernisation of Avid's infrastructure and kept the features rolling. Now Director of Product Management for Sibelius and the Audio SDKs, Sam works with all the departments in Avid to shape the future of its audio products and solutions.
_


Special thanks to the ADC23 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #ai #dsp #audio #protools


Deep Learning for DSP Engineers: Challenges & Tricks for Audio AI – Franco Caspe & Andrea Martelloni


Deep Learning for DSP Engineers: Challenges and Tricks for Audio AI - Franco Caspe & Andrea Martelloni - ADC23

This talk aims to tackle and demystify the process of developing an AI-based musical instrument, audio tool or effect. We want to view this process not from the point of view of technical frameworks and challenges, but from that of the design process, the knowledge required and the learning curve needed to be productive with AI tools, particularly if one approaches AI from an audio DSP background, as we did when we started out.

We are going to quickly survey the current applications of AI for real-time music making, and reflect on the challenges that we found, especially with current learning resources. We will then walk through the process of developing a real-time audio model based on deep learning, from dataset to deployment, highlighting the relevant aspects for those with a DSP background. Finally, we will describe how we applied that process to our own PhD projects, the HITar and the Bessel’s Trick.

Link to Slides:
_

Franco Caspe

I’m an electronic engineer, a maker, hobbyist musician and a PhD Student at the Artificial Intelligence and Music CDT at Queen Mary University of London. I have experience in development of real-time systems for applications such as communication, neural network inference, and DSP. I play guitar and I love sound design, so in my PhD I set out to find ways to bridge the gap that separates acoustic instruments and synthesizers, using AI as an analysis tool for capturing performance features present in the instruments’ audio, and as a generation tool for synthetic sound rendering.
_

Andrea Martelloni

Inventor of the HITar. Interested in applications of deep learning for rich real-time musical interaction and expressive digital musical instruments.
_


Special thanks to the ADC23 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #dsp #audio #ai #deeplearning


Inference Engines and Audio – Harriet Drury – ADC23


Inference Engines and Audio - Harriet Drury - ADC 2023

Machine learning has become a buzzword in recent years, but how does it actually work? This talk aims to introduce and explain inference pipelines. We’ll look at commonly used inference engines, how they work, their suitability for use in audio applications, and how to go about creating your own.

Also introduced will be an approach to writing a custom inference engine for the Cmajor platform.
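As a concrete (and deliberately toy) picture of what an inference engine does, here is a hypothetical sketch: a registry of ops plus a runner that executes a graph of layers in order. Real engines add graph optimisation, memory planning and vectorised kernels on top of this core loop.

```python
# Ops take an input vector and a parameter dict, and return a new vector.
def op_dense(x, params):
    w, b = params["weights"], params["bias"]
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def op_relu(x, params):
    return [max(0.0, v) for v in x]

OPS = {"dense": op_dense, "relu": op_relu}

def run_graph(graph, x):
    """Execute the nodes in order, feeding each node's output to the next."""
    for node in graph:
        x = OPS[node["op"]](x, node.get("params", {}))
    return x

# A two-node graph: a 2-in / 2-out dense layer followed by a ReLU.
graph = [
    {"op": "dense", "params": {"weights": [[1.0, -1.0], [0.5, 0.5]],
                               "bias": [0.0, -1.0]}},
    {"op": "relu"},
]
y = run_graph(graph, [2.0, 1.0])  # -> [1.0, 0.5]
```

For audio use, the key question the talk raises is whether such a runner can execute its whole graph inside one buffer period, every buffer, without allocating memory or blocking.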

Link to Slides: https://data.audio.dev/talks/2023/inference-engines-and-audio/slides.pdf
_

Harriet Drury
_


Special thanks to the ADC23 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #audio #audiotech #machinelearning


Odd Challenges of Using Deep Learning in Designing a Feedback Delay Network Reverb – Wojciech Kacper Werkowicz & Benjamin Whateley


Odd Challenges of Designing a Feedback Delay Network Reverb With Deep Learning - Wojciech Kacper Werkowicz & Benjamin Whateley - ADC 2023

The past lustrum has seen rising interest in the optimization of audio effect and synthesizer parameters, in use cases including parameter inference from audio input, as well as in approaches to Differentiable Digital Signal Processing (such as Magenta's DDSP). However, there are still notable limitations in the area, exemplified well by the problems posed by some fundamental DSP units such as IIR filters: issues of stability, interpretability and differentiability.

In this talk, we will take on all of the above, in the context of a research endeavour into modelling room impulse responses using Feedback Delay Networks (FDNs). Covering a range of approaches, from naive to more advanced, we will take multiple detours to look into machine learning challenges in the context of direct applications to DSP, such as approximating common transformations, tackling computational efficiency, taming the explosivity of feedback systems and, at last, hopefully, differentiating the undifferentiable.
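To make the stability point concrete, here is a toy two-line FDN in plain Python (an illustration, not the talk's model): choosing the feedback matrix as a rotation (orthogonal, hence energy-preserving) scaled by a loop gain below 1 guarantees that the impulse response decays rather than explodes.

```python
import math

def fdn_process(samples, delays, feedback, gain):
    """A minimal two-line feedback delay network with a scalar loop gain."""
    lines = [[0.0] * d for d in delays]   # circular delay buffers
    idx = [0] * len(delays)               # read/write positions
    out = []
    for x in samples:
        reads = [lines[i][idx[i]] for i in range(len(delays))]
        out.append(x + sum(reads))
        for i in range(len(delays)):
            # mix the delay-line outputs through the feedback matrix,
            # scale by the loop gain, and re-inject the input
            fb = sum(feedback[i][j] * reads[j] for j in range(len(delays)))
            lines[i][idx[i]] = x + gain * fb
            idx[i] = (idx[i] + 1) % delays[i]
    return out

# A rotation matrix is orthogonal (energy-preserving); gain < 1 forces decay.
theta = 0.7
rotation = [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]
impulse = [1.0] + [0.0] * 1999
ir = fdn_process(impulse, delays=[37, 53], feedback=rotation, gain=0.9)
```

Learning the matrix entries freely, by contrast, offers no such guarantee, which is exactly the explosivity problem the talk addresses.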
_

Wojciech Kacper Werkowicz

Programmer, computer musician and improviser from Pruszków, Poland. After being introduced to electronic music by "Ishkur's Guide" in an early episode of life, his interest has persisted over the years. He graduated from the Music Computing and Technology BSc programme at Goldsmiths in 2023, where he studied under Michael Zbyszynski, Seth Horvitz and Lance Putnam. He is currently surveying historical and contemporary digital synthesis methods as part of his Master's research at the Institute of Sonology, The Hague, aiming to critically contextualise synthesis technologies through the lens of sound culture and philosophy. Interested in algorithmic music, machine learning and internet culture, he often enjoys mixing lo-fi technologies with the cutting edge.
_

Benjamin Whateley
_


Special thanks to the ADC23 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #deeplearning #dsp #audio


Collaborative Songwriting & Production With Symbolic Generative AI – Sadie Allen & Anirudh Mani – ADC23


Collaborative Songwriting and Production With Symbolic Generative AI - Sadie Allen & Anirudh Mani - ADC23

Generative AI has experienced remarkable advancements in various domains, including audio and music. However, despite these breakthroughs, we have yet to reach a stage where musicians can seamlessly incorporate generative AI into their creative processes. In this talk, we will delve into the techniques, proposals, and ongoing work that can facilitate collaborative songwriting and production with machine learning.

During the session, we will explore several key topics:
• Overview of existing tools and models - we will discuss the motivations behind symbolic generation versus raw audio for music production applications. Furthermore, we will highlight the contrasting approaches and techniques that aim to augment the creative process rather than replace it entirely.
• Utilization of AI-generated MIDI as a songwriting tool - this will involve examining different ML architectures for conditional MIDI generation, as well as employing reinforcement learning (RL) to generate MIDI sequences.
• Examples showcasing how the speakers and other musicians currently use AI-generated MIDI as part of their songwriting/production process.

Attendees will gain insights into cutting-edge techniques and research, paving the way for a future where generative AI becomes an integral part of the creative process for musicians.
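As a deliberately simple illustration of symbolic generation (a toy, not any of the models discussed in the talk), here is a first-order Markov chain over MIDI note numbers, "conditioned" on a key by restricting its states to one scale:

```python
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # C4..C5 as MIDI note numbers

def train(note_pairs):
    """Count pitch-to-pitch transitions from (prev, next) pairs."""
    table = {}
    for prev, nxt in note_pairs:
        table.setdefault(prev, []).append(nxt)
    return table

def generate(table, start, length, rng):
    """Walk the transition table; fall back to the scale if a pitch is unseen."""
    notes = [start]
    for _ in range(length - 1):
        choices = table.get(notes[-1], C_MAJOR)
        notes.append(rng.choice(choices))
    return notes

corpus = list(zip(C_MAJOR, C_MAJOR[1:]))  # a trivial ascending "melody"
table = train(corpus)
melody = generate(table, start=60, length=8, rng=random.Random(0))
```

The conditional models discussed in the talk replace this counting step with learned neural (or RL-shaped) distributions, but the symbolic input/output contract — notes in, notes out — is the same.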

Link to Slides: https://drive.google.com/file/d/15qYW-SbgmodMZ_wiMKKvH8pXmrDCZQpY/view?usp=sharing
_

Sadie Allen
_

Anirudh Mani

I build creative AI tools for artists. I am the co-founder of Lemonaide Music.
https://www.lemonaide.ai/
https://www.linkedin.com/in/anirudh-mani-1796934b/
https://twitter.com/anirudh3
_


Special thanks to the ADC23 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #ai #dsp #audio #generativeai


Motion and Music Modeling in Hindustani Classical Music – Tejaswinee Kelkar – ADCx India 2024


Motion and Music Modeling in Hindustani Classical Music - Tejaswinee Kelkar - ADCx India 2024

My talk will summarize computational generative approaches in North Indian classical music (NICM). NICM presents a unique problem: the non-quantization of notes and the characteristic use of pitch contours to express sonic differentiation mean that quantized modeling of, for example, sheet-based music only goes so far in shaping generative Hindustani music. I will present notation-based and character-based RNN approaches for generating Hindustani improvisation.

Generative musical AI in NICM is not really described as a task in its own right. Moreover, pre-trained generative music models are modeled on Western music of the common-practice period, and are largely unsuitable for generating anything in this vocabulary. Sample-based generative AI for NICM has, as of this abstract, not been explored as a field of its own. Musical AI in NICM is mostly explored from the point of view of modeling ragas and raga recognition tasks.

In my previous work, I have addressed how phrase-generation models and contour models are perceptually important for tasks such as this. I will present an overview of the state of knowledge at the intersection of these fields and of the state of the art in generative techniques for NICM.
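The role of non-quantized pitch contours can be sketched in a few lines: instead of a step between two notes, a meend-like glide is a continuous curve through fractional pitch values (the notes and curve shape here are illustrative assumptions, not a model of any raga):

```python
import math

def glide(start_midi, end_midi, num_points):
    """Sample a smooth (raised-cosine) pitch glide between two notes,
    returning fractional MIDI pitches -- the contour passes *through*
    the space between quantized notes rather than stepping."""
    contour = []
    for n in range(num_points):
        t = n / (num_points - 1)
        s = (1 - math.cos(math.pi * t)) / 2  # ease-in/ease-out, 0..1
        contour.append(start_midi + s * (end_midi - start_midi))
    return contour

# An illustrative slide from C4 (MIDI 60) up to E4 (MIDI 64)
contour = glide(60, 64, 11)
```

A note-level (quantized) representation discards everything between the endpoints, which is why contour models matter for this repertoire.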

Link to Slides: https://data.audio.dev/talks/ADCxIndia/2024/rnns-and-hindustani-music.pdf
_

Edited by Digital Medium Ltd - online.digital-medium.co.uk
_


Special thanks to the ADC24 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Prashant Mishra

#adc #ai #audio #hindustaniclassicalmusic
