
Wilsonic: Microtonal Scale Design in the DAW – Marcus Hobbs – ADCx SF

https://www.youtube.com/watch?v=4C2XdCjVnkA

Join Us For ADC23 - London - 13-15 November 2023
More Info: https://audio.dev/
@audiodevcon

Wilsonic MTS-ESP is an interactive musical scale design tool that tunes all of the software synthesizers in your DAW in real time, with every tuning parameter automatable. We will explore the most novel of Erv Wilson's scale system designs and show you how to design and personalize your own scales to create your signature sound and emotional palettes.
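To make the idea of scale design concrete, here is a minimal sketch (illustrative only, not Wilsonic's implementation) of how a scale expressed in cents above a base pitch maps to frequencies; the `scale_frequencies` helper and the just-intonation cent values below are assumptions for the example:

```python
def scale_frequencies(base_hz, cents):
    """Map scale degrees, given in cents above the base pitch, to frequencies.

    A cent is 1/1200 of an octave, so each degree multiplies the base
    frequency by 2 ** (cents / 1200).
    """
    return [base_hz * 2 ** (c / 1200.0) for c in cents]

# A 5-limit just-intonation major scale, expressed in (approximate) cents:
just_major = [0.0, 203.91, 386.31, 498.04, 701.96, 884.36, 1088.27, 1200.0]
frequencies = scale_frequencies(261.63, just_major)  # degrees above middle C
```

Retuning a synth then amounts to sounding these frequencies instead of the 12-tone equal temperament defaults; MTS-ESP's role is to broadcast a tuning like this to every compatible plugin in the session at once.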

Slides: https://data.audio.dev/talks/ADCxSF/2023/wilsonic-microtonal-scale-design-in-the-daw/slides.pdf
Marcus Hobbs

Microtonal electronica creator and audio plug-in developer. Pioneering microtonality by implementing Erv Wilson's scale designs and tuning systems. Creator of Wilsonic and Synth One. Microtonal education and software evangelist.

Edited by Digital Medium Ltd - online.digital-medium.co.uk

Organized and produced by JUCE: https://juce.com/

Special thanks to the ADC Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper

#audiodevcon #audiodev #midi


Audio In, MIDI Out – Avrosh Kumar & Chris Latina – ADCx SF

Audio plugins enable musicians to apply effects to audio tracks, synthesize sounds through virtual instruments, and even transform MIDI events using MIDI effects. Mixing and mastering engineers also make use of analyzers and mastering plugins that can extract instantaneous and statistical information about the audio track to help guide the mixing process. As plugin developers, we can tinker with audio and MIDI data in several ways to augment musicians’ creative workflows.

In this talk, we will explore a special case of such plugin implementations: generating MIDI output using audio as input. This can be achieved by performing real-time content analysis on an audio track and producing control signals as MIDI events that represent high-level features extracted from the audio. These MIDI events can then be used to trigger or modulate external parameters and engines.

Audio-to-MIDI capability is already natively supported by some DAWs, which convert the melodic content of audio to a MIDI representation; some plugins even use it for real-time voice-to-pitch conversion. MIDI is not only a format for representing melodies and harmonies but is also excellent for producing annotated events and control signals via program change and control change (CC) messages. We will leverage this in our Audio-to-MIDI plugins to potentially create controlled chaos (think Buchla).
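To make that concrete, here is a minimal sketch of one such analysis path (illustrative only, not the presenters' implementation): measure each audio block's RMS level, quantize it to a 7-bit value, and wrap it in a raw MIDI Control Change message. The function names and the -60 dB floor are assumptions for the example:

```python
import math

def block_rms(samples):
    """Root-mean-square level of one audio block (floats in [-1, 1])."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def rms_to_cc(rms, floor_db=-60.0):
    """Quantize an RMS level to a 7-bit MIDI CC value (0..127).

    Levels at or below `floor_db` map to 0; full scale maps to 127.
    """
    if rms <= 0.0:
        return 0
    db = 20.0 * math.log10(rms)
    norm = max(0.0, min(1.0, (db - floor_db) / -floor_db))
    return round(norm * 127)

def cc_message(channel, controller, value):
    """Raw 3-byte MIDI Control Change message (status byte 0xBn)."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

# e.g. drive CC 1 (mod wheel) on channel 0 from a block's loudness:
msg = cc_message(0, 1, rms_to_cc(block_rms([0.25, -0.25, 0.5, -0.5])))
```

A real plugin would run this per processing block and smooth the values to avoid zipper noise, but the audio-feature-to-CC mapping is the essence of the idea.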

Slides: https://data.audio.dev/talks/ADCxSF/2023/audio-in-midi-out/slides.pdf
Avrosh Kumar

Speech Research Engineer @ Pindrop
Plugin Developer, Music Tech Enthusiast

Chris Latina

As a sonic explorer melding hybrid hardware and software mediums, Christopher Latina works at the crossroads of engineering, design and music composition to create immersive environments.

#audiodevcon #audiodev #midi


Musical Sound Coding for Cochlear Implants – Shaikat Hossain – ADCx SF

This presentation will provide an introduction to how cochlear implant (CI) devices work, delve into their signal processing, which is based on the channel vocoder, and explore some of the limitations of contemporary CI sound coding strategies. It will then examine possible approaches to improve sound coding and facilitate better music perception for CI users. Code and audio vocoder examples will be presented that simulate what CI sound processing might sound like.
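To give a rough feel for the channel vocoder at the heart of CI processing, here is a heavily simplified noise-band vocoder sketch (an assumption-laden illustration, not the presenter's code); real CI strategies use proper filter banks, logarithmically spaced channels, and per-channel compression, none of which appear here:

```python
import numpy as np

def noise_vocoder(x, n_channels=8, frame=256):
    """Crude noise-band vocoder in the spirit of CI simulations.

    Each frame's spectrum is split into `n_channels` contiguous bands;
    the output is white noise whose per-band energy is matched to the
    input's, discarding fine spectral detail much as a CI does.
    """
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    edges = np.linspace(0, frame // 2 + 1, n_channels + 1, dtype=int)
    for start in range(0, len(x) - frame + 1, frame):
        spec = np.fft.rfft(x[start:start + frame])
        nspec = np.fft.rfft(rng.standard_normal(frame))
        for b in range(n_channels):
            lo, hi = edges[b], edges[b + 1]
            env = np.sqrt(np.mean(np.abs(spec[lo:hi]) ** 2))    # band envelope
            band = np.sqrt(np.mean(np.abs(nspec[lo:hi]) ** 2))
            if band > 0:
                nspec[lo:hi] *= env / band                      # impose envelope
        out[start:start + frame] = np.fft.irfft(nspec, frame)
    return out
```

Listening to speech or music passed through a vocoder like this, with the channel count lowered, is a common way for normal-hearing listeners to get an impression of CI sound quality.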

Slides: musical-sound-coding-for-cochlear-implants/slides.pdf
Shaikat Hossain

Shaikat has worked on research in the area of psychoacoustics and improving sound coding for cochlear implant users. He also works on spatial audio for music.

#audiodevcon #audiodev #cochlearimplant


High-Level Programming of FPGAs for Audio Real-Time Signal Processing Applications – Romain Michon

Field Programmable Gate Arrays (FPGAs) are notoriously hard to program; however, they present significant advantages over other types of processors for real-time audio signal processing applications. In particular, they offer unparalleled performance in terms of latency and computational power, can process very large numbers of audio channels in parallel, and can run at very high sampling rates (10+ MHz). During this presentation, we'll give an overview of the open-source SyFaLa toolchain, which allows for the programming of Xilinx FPGAs with the Faust programming language. We'll also present various applications highlighting different aspects of the potential FPGAs offer for audio: spatial audio, high-sampling-rate computing, active control of room acoustics, etc.

Slides: https://data.audio.dev/talks/ADCxSF/high-level-programming-of-fpgas/slides.pdf
Romain Michon

Romain Michon is faculty researcher at INRIA (Emeraude Team), associate professor at INSA Lyon, associate researcher at GRAME -- Centre National de Création Musicale in Lyon (France), and lecturer at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University (USA). He has been involved in the development of the Faust programming language since 2008 and he's now part of the core Faust development team at GRAME. He's currently leading (PI) the FAST project (https://fast.grame.fr) which aims at (i) facilitating the programming of FPGAs (Field-Programmable Gate Array) in the context of real-time audio signal processing and (ii) exploring the use of this type of platform for the active control of room acoustics. Romain's research interests also involve embedded systems for real-time audio processing, Human Computer Interaction (HCI), New Interfaces for Musical Expression (NIME), and physical modeling of musical instruments.

Maxime Popoff

Maxime Popoff is a PhD candidate at INSA Lyon (Institut National des Sciences Appliquées), specialized in electronic and embedded systems. He studied at Grenoble-INP and worked as an engineer at the CEA Grenoble and then at Inria where he joined the Emeraude team (INSA Lyon, Inria, GRAME) in 2020. His research work focuses on embedded audio platforms and their programming.

#audiodevcon #audiodev #fpga


Taming Real-Time Logging: Lessons Learned From the Trenches – Chris Apple – ADCx SF

As audio programmers, we know the cardinal rule: "Thou Shalt Not Use Unbounded System Calls in the Realtime Thread". However, it can be useful to have log statements for debugging, posting periodic analytics, or monitoring the status of a long-running audio thread.

By combining open source libraries, we can create a simple logger that is lock-free, allocation-free, and wait-free. This logger allows us to use printf/cout/spdlog even from our rendering thread. This talk will provide a practical look at the design and creation of a real-time logger and how to avoid potential time-unbounded booby traps.
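The core pattern behind such a logger can be sketched as a single-producer, single-consumer ring buffer (Python here for readability; a real implementation would be C++ with atomic read/write indices, and the `RingLogger` name is illustrative): the audio thread only writes into preallocated slots and never blocks, while a background thread does the slow I/O.

```python
class RingLogger:
    """SPSC ring-buffer logger sketch.

    The producer (audio thread) writes preformatted messages into a
    preallocated buffer; the consumer (logger thread) drains them and
    performs the slow, possibly blocking output. In C++ the two indices
    would be std::atomic to make this safe without locks.
    """

    def __init__(self, capacity=1024):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.write_idx = 0  # advanced only by the producer
        self.read_idx = 0   # advanced only by the consumer

    def try_push(self, msg):
        """Producer side: never blocks; drops the message if full."""
        nxt = (self.write_idx + 1) % self.capacity
        if nxt == self.read_idx:
            return False  # full -- dropping beats stalling the audio thread
        self.buf[self.write_idx] = msg
        self.write_idx = nxt
        return True

    def drain(self, sink):
        """Consumer side: hand every pending message to `sink`."""
        while self.read_idx != self.write_idx:
            sink(self.buf[self.read_idx])
            self.read_idx = (self.read_idx + 1) % self.capacity
```

Dropping messages when the buffer is full is the key design choice: a lost log line is recoverable, but a glitch in the audio stream is not.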

Slides: https://data.audio.dev/talks/ADCxSF/designing-smart-algorithms/slides.pdf
Chris Apple

Chris Apple is the Lead Audio Software Engineer at Spatial Inc. He has spent the last 8 years specializing in immersive audio at Dolby and Roblox, including working on the first implementation of Dolby Atmos Music. He has contributed to the development of immersive audio in the Roblox Game Engine, the Dolby Atmos Music Panner, and the integration of Atmos into Apple Logic Pro X.

Recently, Chris has focused on streamlining the audio pipeline at Spatial. He is passionate about modern audio programming best practices when writing performant, real-time safe code.

#audiodevcon #audiodev #logging


Designing Smart Algorithms with Traditional DSP vs. Machine Learning – Amit Shoham – ADCx SF

Smart algorithms such as wakeword detection, tempo detection, song recognition, and many others have become an integral component of countless applications. These algorithms rely on a wide range of machine learning and/or traditional DSP techniques. While machine learning techniques are now solving previously impossible problems, algorithms designed with more traditional engineering techniques often require fewer system resources and are easier to deploy. In this talk we'll compare and contrast machine learning and traditional engineering approaches, and discuss fundamental principles that will help you determine what mix of techniques is best for your application.
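As a small example of the "traditional DSP" side of that comparison, here is a sketch of autocorrelation-based tempo estimation (illustrative assumptions throughout, not the speaker's code): no training data, a tiny footprint, and behavior that is easy to reason about.

```python
import numpy as np

def tempo_from_envelope(onset_env, frames_per_sec):
    """Estimate tempo (BPM) from an onset-strength envelope.

    Autocorrelate the envelope and pick the lag with the strongest
    periodicity inside a plausible 60..200 BPM range.
    """
    env = onset_env - np.mean(onset_env)
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    lo = int(frames_per_sec * 60 / 200)   # 200 BPM -> shortest lag
    hi = int(frames_per_sec * 60 / 60)    # 60 BPM  -> longest lag
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return 60.0 * frames_per_sec / lag
```

A learned model might beat this on syncopated or noisy material, but the handcrafted version deploys anywhere, uses a few kilobytes, and fails in ways an engineer can diagnose, which is exactly the trade-off the talk examines.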

Slides: https://data.audio.dev/talks/ADCxSF/designing-smart-algorithms/slides.pdf

Amit Shoham

Amit Shoham has led dual careers as a musician and engineer. His professional roles have included DSP engineer, house music producer, DJ, computer vision engineer, remixer, deep learning engineer, mastering engineer, and troublemaker. Amit is currently a senior systems architect and algorithms guru at Artiphon, where his deep expertise in algorithm design and optimization helps bring to life innovative new musical instruments.

#audiodevcon #audiodev #deeplearning
