
Shrink Your VA Model Neural Networks!

13:00 - 13:20 UTC | Friday 1st November 2024 | ADCx Gather
Online Only

Capturing the complex nonlinear behavior of analog circuits is a central challenge in virtual analog modeling. Selecting an appropriate neural network size and architecture for these function approximation tasks currently relies heavily on trial-and-error methods such as grid search. These approaches are time-consuming, computationally intensive, and lack a solid theoretical foundation, often producing oversized models that are inefficient and impractical for real-time applications.

This talk introduces ideas for a framework that systematically determines optimal neural network architectures for virtual analog modeling. It examines the geometric structures and symmetries underlying the complexity of the modeled system, and shows how network architectures can be designed to replicate these structures while using resources efficiently. The approach centers on function approximation, tailoring the network architecture to the mathematical functions underlying the model.

Oversized networks carry two costs: they tend to memorize training data rather than learn the underlying patterns, leading to poor generalization, and they consume excessive computational resources and memory, compounding the challenges of hardware limitations and low-latency requirements. The talk closes by framing the model as code, code as a compiler, and the output as a function, a perspective that aims to streamline development and support the practical deployment of neural networks in audio signal processing.
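To make the trial-and-error baseline concrete, here is a minimal sketch (not from the talk) of the kind of grid search the abstract criticizes: sweeping the width of a one-hidden-layer tanh MLP fitted to a memoryless soft-clipper curve, a hypothetical stand-in for the static nonlinearity of an analog stage. All function names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def clipper(x):
    # Hypothetical target: a hard-clipping transfer curve, standing in
    # for the static nonlinearity of an analog circuit stage.
    return np.clip(1.5 * x, -1.0, 1.0)

def train_mlp(width, x, y, epochs=3000, lr=0.05, seed=0):
    """Fit y_hat = w2 @ tanh(w1 @ x + b1) + b2 by full-batch gradient
    descent on mean-squared error; returns the final training MSE."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0.0, 1.0, (width, 1)); b1 = np.zeros((width, 1))
    w2 = rng.normal(0.0, 1.0, (1, width)); b2 = np.zeros((1, 1))
    n = x.shape[1]
    for _ in range(epochs):
        h = np.tanh(w1 @ x + b1)              # hidden activations
        err = (w2 @ h + b2) - y               # output error
        # Backpropagated gradients of the MSE loss.
        g_w2 = err @ h.T / n
        g_b2 = err.mean(axis=1, keepdims=True)
        g_h = (w2.T @ err) * (1.0 - h ** 2)
        g_w1 = g_h @ x.T / n
        g_b1 = g_h.mean(axis=1, keepdims=True)
        w1 -= lr * g_w1; b1 -= lr * g_b1
        w2 -= lr * g_w2; b2 -= lr * g_b2
    y_hat = w2 @ np.tanh(w1 @ x + b1) + b2
    return float(np.mean((y_hat - y) ** 2))

x = np.linspace(-1.0, 1.0, 256).reshape(1, -1)
y = clipper(x)
# The grid search itself: train one model per candidate width.
results = {w: train_mlp(w, x, y) for w in (1, 2, 4, 8, 16)}
for w, mse in results.items():
    print(f"width={w:2d}  mse={mse:.5f}")
```

Even this toy sweep trains five models to size one layer of one network, which is exactly the cost the proposed framework tries to avoid by reasoning about the target function's structure up front.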

Christopher Clarke