Join Us For ADC24 - Bristol - 11-13 November 2024
More Info: https://audio.dev/
@audiodevcon

Spectral Audio Modeling: Why Did It Evolve and Do We Need It Now? - Julius Smith - ADC 2023

This talk summarizes historical developments in spectral audio synthesis and processing, touching on origins in evolution, musical practices, innovations at Bell Labs and CCRMA, and AI approaches today. Our ears continue to feed spectral decompositions to the brain, showing no signs of becoming vestigial in favor of purely neural processing. In machine learning, on the other hand, spectral representations are often omitted in favor of time-domain waveform encodings. How do we reconcile this? Looking at continuing uses of spectral audio processing in AI, we find that AI and spectral processing remain partners and are likely to continue symbiotically for the foreseeable future.
_

Julius Smith

Julius O. Smith is a research engineer, educator, and musician devoted primarily to developing new technologies for music and audio signal processing. He received the B.S.E.E. degree from Rice University in 1975 (Control, Circuits, and Communication), and the M.S. and Ph.D. degrees in E.E. from Stanford University, in 1978 and 1983, respectively. For his MS/EE, he focused largely on Stanford's superb curriculum in statistical signal processing. His Ph.D. research was devoted to improved methods for digital filter design and system identification applied to music and audio systems, particularly the violin. From 1975 to 1977 he worked in the Signal Processing Department at ESL, Sunnyvale, CA, on systems for digital communications. From 1982 to 1986 he was with the Adaptive Systems Department at Systems Control Technology, Palo Alto, CA, where he worked in the areas of adaptive filtering and spectral estimation. From 1986 to 1991 he was employed at NeXT Computer, Inc., responsible for sound, music, and signal processing software for the NeXT computer workstation. After NeXT, he became a Professor at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford, with a courtesy appointment in EE, teaching courses and pursuing/supervising research related to signal processing techniques applied to music and audio systems. At varying part-time levels, he was a founding consultant for Staccato Systems, Shazam Inc., and moForte Inc. He is presently a Professor Emeritus of Music and by courtesy Electrical Engineering at Stanford, and a perennial consultant for moForte Inc. For more information, see http://ccrma.stanford.edu/~jos/.
_

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC23 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #audiodev #dsp #audio #ai