
Massive leveraging of FlexBox using the JUCE C++ classes – Nick Porcaro – ADCx SF

Join Us For ADC23 - London - 13-15 November 2023
More Info: https://audio.dev/
@audiodevcon

Massive leveraging of FlexBox using the JUCE C++ classes - Nick Porcaro ADCx SF

UI development using the JUCE classes that wrap FlexBox, particularly via PluginGuiMagic, has been used to build a new portable version of GeoShred, a large and popular iOS music app. Several strategic modifications and optimizations were made to PluginGuiMagic to make this happen. This talk will demonstrate the app and show how its layouts are made.

Slides & Resources: https://data.audio.dev/talks/ADCxSF/2023/massive-leveraging-of-flexbox-using-juce/preset.pdf
https://data.audio.dev/talks/ADCxSF/2023/massive-leveraging-of-flexbox-using-juce/slides.pdf
https://data.audio.dev/talks/ADCxSF/2023/massive-leveraging-of-flexbox-using-juce/layout.pdf


_
Nick Porcaro

Nick is a software developer proficient in many languages, including Objective-C, C++, PHP, JavaScript, Python, and Perl. He holds a B.S. in electrical engineering from Texas A&M and was a visiting scholar at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University. He was a founder of Staccato Systems and has worked for several other startups in a variety of fields: electronic design automation, geophysics, e-commerce, and digital audio. He has also done independent artistic work combining 3D graphics with sound. In late 2011 he released an iPhone app called UndAground – New York, a cultural guide to New York City. He is also a jazz pianist.

Edited by Digital Medium Ltd - online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper

#audiodevcon #audiodev #juce


Creating Your Own Singing Voice Synthesizer: Overcoming Data Collection Challenges – by @mattricesound – ADCx SF

Join Us For ADC23 - London - 13-15 November 2023
More Info: https://audio.dev/
@audiodevcon

Creating Your Own Singing Voice Synthesizer: Overcoming Data Collection Challenges - Matthew Rice - ADCx SF

While singing voice synthesizers have existed for decades, recent deep-learning-based products (Sinsy, Vocaloid) have greatly improved the quality of the results. However, these systems provide only a limited number of pre-trained "voices" based on proprietary datasets. Luckily, open-source systems (NNSVS, OpenUtau, VISinger, DiffSinger) exist, allowing users to use custom datasets to create a singing voice synthesizer. Unfortunately, creating the necessary datasets is a time-consuming process that requires collecting phoneme-level timing and other data points. As a result, few public datasets are available, and those that do exist are mostly restricted to Mandarin Chinese and Japanese. In this talk, I will demonstrate several approaches to collecting this data, from manual labeling to fully automated procedures, making it easier for everyone to create their own personalized singing voice synthesizers.

Slides: https://data.audio.dev/talks/ADCxSF/2023/creating-your-own-singing-voice-synthesizer/slides.pdf
_
Matthew Rice

Matthew Rice is a master's student at Queen Mary University of London, studying Sound and Music Computing with a focus on music production applications of deep learning. Previously, Matthew was a software engineer at the startup Mayk, working on both the audio engine and audio research teams. Matthew also has experience in digital hardware and embedded systems, having worked at Qualcomm designing PMICs and audio codec drivers.

Edited by Digital Medium Ltd - online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper

#audiodevcon #audiodev #synthesizer


Leveraging JUCE for Developing Spatial Audio Plugins – Scott Murakami – ADCx SF

Join Us For ADC23 - London - 13-15 November 2023
More Info: https://audio.dev/
@audiodevcon

Leveraging JUCE for Developing Spatial Audio Plugins - Scott Murakami - ADCx SF

With the increased availability of immersive sound experiences through headphones and speakers, spatial audio has gained popularity in recent years. In this talk, we will present our experience using JUCE to build spatial audio plugins, particularly for virtualizing real-world acoustic environments. We will discuss the benefits of using JUCE from both an engineering and a team management perspective, providing practical tips and best practices for leveraging the framework to create immersive and realistic spatial audio experiences. By sharing our insights, this talk aims to provide valuable information for developers of all experience levels who are interested in building spatial audio plugins with JUCE.

_
Scott Murakami

I am passionate about audio in all its forms and approach it from different angles. I am both an engineer and an artist who has been focused on audio for many years, from both the technical and the artistic perspectives.

Currently, I work as a Software Developer for Embody making spatial audio tools for musicians, engineers, and the like. In my free time I enjoy tinkering with synthesizers and computers to create electronic music & sound art.

Edited by Digital Medium Ltd - online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper

#audiodevcon #audiodev #spatialaudio


Wilsonic: Microtonal Scale Design in the DAW – Marcus Hobbs – ADCx SF

https://www.youtube.com/watch?v=4C2XdCjVnkA

Join Us For ADC23 - London - 13-15 November 2023
More Info: https://audio.dev/
@audiodevcon

Wilsonic: Microtonal Scale Design in the DAW - Marcus Hobbs - ADCx SF

Wilsonic MTS-ESP is an interactive musical scale design tool which tunes up all of the software synthesizers in your DAW in real-time, and every tuning parameter is automatable. We will explore the most novel of Erv Wilson's scale system designs and show you how to explore and personalize your own scales to create your signature sound and emotional palettes.
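The retuning the abstract describes boils down to mapping each MIDI note to a frequency under a user-designed scale rather than 12-tone equal temperament. A hedged sketch of that mapping (a simplified model of what a tuning provider supplies to synths; names and defaults here are illustrative, not the MTS-ESP API):

```cpp
#include <cmath>
#include <vector>

// Map a MIDI note to a frequency under a repeating scale given in cents.
// `scaleCents` lists the cents of each degree above the period's root;
// `periodCents` is typically 1200 (an octave) but need not be.
double noteToHz(int midiNote, const std::vector<double>& scaleCents,
                double periodCents = 1200.0, int rootNote = 60,
                double rootHz = 261.6255653)
{
    const int n = (int) scaleCents.size();
    const int steps = midiNote - rootNote;
    const int period = (int) std::floor((double) steps / n);
    const int degree = steps - period * n;        // always in [0, n)
    const double cents = period * periodCents + scaleCents[degree];
    return rootHz * std::pow(2.0, cents / 1200.0);
}
```

Feeding in the twelve degrees of 12-EDO (0, 100, …, 1100 cents) recovers standard tuning, with MIDI note 69 landing on 440 Hz; swapping in one of Wilson's scale designs changes every note's frequency while the plugin code stays the same.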

Slides: https://data.audio.dev/talks/ADCxSF/2023/wilsonic-microtonal-scale-design-in-the-daw/slides.pdf
_
Marcus Hobbs

Microtonal electronica creator and audio plug-in developer. Pioneering microtonality by implementing Erv Wilson's scale designs and tuning systems. Creator of Wilsonic and Synth One. Microtonal education and software evangelist.

Edited by Digital Medium Ltd - online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper

#audiodevcon #audiodev #midi


Audio In, MIDI Out – Avrosh Kumar & Chris Latina – ADCx SF

Join Us For ADC23 - London - 13-15 November 2023
More Info: https://audio.dev/
@audiodevcon

Audio In, MIDI Out - Avrosh Kumar & Chris Latina - ADCx SF

Audio plugins enable musicians to apply effects to audio tracks, synthesize sounds through virtual instruments, and even transform MIDI events using MIDI effects. Mixing and mastering engineers also make use of analyzers and mastering plugins that can extract instantaneous and statistical information about the audio track to help guide the mixing process. As plugin developers, we can tinker with audio and MIDI data in several ways to augment musicians’ creative workflows.

In this talk, we will explore a special case of such plugin implementations and generate MIDI output using audio as input. This can be achieved by performing real-time content analysis on an audio track and producing control signals as MIDI events that represent high-level features extracted from the audio. These MIDI events can then be used to trigger or modulate external parameters and engines.

Audio-to-MIDI capability is already natively supported by some DAWs, which convert the melodic content of audio into a MIDI representation; it is even used by some plugins for real-time voice-to-pitch conversion. MIDI is not only a format for representing melodies and harmonies; it is also excellent for producing annotated events and control signals via program change and control change (CC) messages. We will leverage this in our audio-to-MIDI plugins to potentially create controlled chaos (think Buchla).
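The simplest instance of the feature-to-control-signal mapping described above is turning a block's loudness into a CC value. A hedged sketch, assuming an RMS-in-decibels feature scaled onto the 7-bit CC range (the function name, dB floor, and scaling are illustrative choices, not from the talk):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Map the RMS level of one audio block onto a MIDI CC value in [0, 127].
// Levels at or below `floorDb` map to 0; full scale (0 dBFS) maps to 127.
uint8_t rmsToCC(const std::vector<float>& block, float floorDb = -60.0f)
{
    float sum = 0.0f;
    for (float s : block) sum += s * s;
    const float rms = std::sqrt(sum / (float) block.size());
    const float db  = 20.0f * std::log10(std::max(rms, 1e-9f));

    // Scale [floorDb, 0 dB] linearly onto [0, 1], then onto 7 bits.
    const float norm = std::clamp((db - floorDb) / -floorDb, 0.0f, 1.0f);
    return (uint8_t) std::lround(norm * 127.0f);
}
```

Emitting the result as a CC message once per block yields a control signal any downstream engine can consume; richer features (onsets, spectral centroid, pitch) slot into the same pattern.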

Slides: https://data.audio.dev/talks/ADCxSF/2023/audio-in-midi-out/slides.pdf
_
Avrosh Kumar

Speech Research Engineer @ Pindrop
Plugin Developer, Music Tech Enthusiast

Chris Latina

As a sonic explorer melding hybrid hardware and software mediums, Christopher Latina works at the crossroads of engineering, design and music composition to create immersive environments.

Edited by Digital Medium Ltd - online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper

#audiodevcon #audiodev #midi


Musical Sound Coding for Cochlear Implants – Shaikat Hossain – ADCx SF

Join Us For ADC23 - London - 13-15 November 2023
More Info: https://audio.dev/
@audiodevcon

Musical Sound Coding for Cochlear Implants - Shaikat Hossain - ADCx SF

This presentation will provide an introduction to how cochlear implant (CI) devices work, delve into their signal processing, which is based on the channel vocoder, and explore some of the limitations of contemporary CI sound coding strategies, along with possible approaches to improve sound coding and facilitate better music perception for CI users. Code and audio vocoder examples will be presented that simulate what CI sound processing might sound like.
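The channel vocoder at the heart of CI processing extracts the envelope of each analysis band and uses it to modulate a carrier. One channel of that process, reduced to its core, can be sketched as below; a real CI-style vocoder adds a bandpass filterbank per channel, and the names and cutoff here are illustrative, not the talk's code:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One vocoder channel: follow the modulator's envelope (rectify, then
// smooth with a one-pole lowpass) and impose that envelope on a carrier
// such as a noise or tone band.
std::vector<float> vocodeChannel(const std::vector<float>& modulator,
                                 const std::vector<float>& carrier,
                                 float sampleRate, float cutoffHz = 50.0f)
{
    constexpr float pi = 3.14159265f;
    const float a = std::exp(-2.0f * pi * cutoffHz / sampleRate);

    float env = 0.0f;
    std::vector<float> out(modulator.size());
    for (size_t i = 0; i < modulator.size(); ++i)
    {
        env = a * env + (1.0f - a) * std::fabs(modulator[i]);
        out[i] = env * carrier[i];
    }
    return out;
}
```

Summing several such channels, each fed by a different bandpass slice of the input, reconstructs intelligible speech but discards the fine temporal structure that carries much of musical pitch, which is one source of the limitations the talk discusses.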

Slides: musical-sound-coding-for-cochlear-implants/slides.pdf
_
Shaikat Hossain

Shaikat has worked on research in the area of psychoacoustics and improving sound coding for cochlear implant users. He also works on spatial audio for music.

Edited by Digital Medium Ltd - online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper

#audiodevcon #audiodev #cochlearimplant


High-Level Programming of FPGAs for Audio Real-Time Signal Processing Applications – Romain Michon

Join Us For ADC23 - London - 13-15 November 2023
More Info: https://audio.dev/
@audiodevcon

High-Level Programming of FPGAs for Audio Real-Time Signal Processing Applications - Romain Michon

Field Programmable Gate Arrays (FPGAs) are notoriously hard to program; however, they present significant advantages over other types of processors for real-time audio signal processing. In particular, they offer unparalleled performance in terms of latency and computational power, can process very large numbers of audio channels in parallel, and can run at very high sampling rates (above 10 MHz). During this presentation, we'll give an overview of the open-source SyFaLa toolchain, which allows Xilinx FPGAs to be programmed with the Faust programming language. We'll also present various applications focusing on different aspects of the potential offered by FPGAs in the context of audio: spatial audio, high-sampling-rate computing, active control of room acoustics, etc.

Slides: https://data.audio.dev/talks/ADCxSF/high-level-programming-of-fpgas/slides.pdf
_
Romain Michon

Romain Michon is faculty researcher at INRIA (Emeraude Team), associate professor at INSA Lyon, associate researcher at GRAME -- Centre National de Création Musicale in Lyon (France), and lecturer at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University (USA). He has been involved in the development of the Faust programming language since 2008 and he's now part of the core Faust development team at GRAME. He's currently leading (PI) the FAST project (https://fast.grame.fr) which aims at (i) facilitating the programming of FPGAs (Field-Programmable Gate Array) in the context of real-time audio signal processing and (ii) exploring the use of this type of platform for the active control of room acoustics. Romain's research interests also involve embedded systems for real-time audio processing, Human Computer Interaction (HCI), New Interfaces for Musical Expression (NIME), and physical modeling of musical instruments.

Maxime Popoff

Maxime Popoff is a PhD candidate at INSA Lyon (Institut National des Sciences Appliquées), specialized in electronic and embedded systems. He studied at Grenoble-INP and worked as an engineer at the CEA Grenoble and then at Inria where he joined the Emeraude team (INSA Lyon, Inria, GRAME) in 2020. His research work focuses on embedded audio platforms and their programming.

Edited by Digital Medium Ltd - online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper

#audiodevcon #audiodev #fpga


Taming Real-Time Logging: Lessons Learned From the Trenches – Chris Apple – ADCx SF

Join Us For ADC23 - London - 13-15 November 2023
More Info: https://audio.dev/
@audiodevcon

Taming Real-Time Logging: Lessons Learned From the Trenches - Chris Apple - ADCx SF

As audio programmers, we know the cardinal rule: "Thou Shalt Not Use Unbounded System Calls in the Realtime Thread". However, it can be useful to have log statements for debugging, posting periodic analytics, or monitoring the status of a long-running audio thread.

By combining open source libraries, we can create a simple logger that is lock-free, allocation-free, and wait-free. This logger allows us to use printf/cout/spdlog even from our rendering thread. This talk will provide a practical look at the design and creation of a real-time logger and how to avoid potential time-unbounded booby traps.
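The mechanism that makes such a logger possible is a single-producer/single-consumer ring buffer of fixed-size messages: the audio thread pushes without locking or allocating, and a background thread drains the queue and does the actual printing. A minimal sketch of that queue, under those assumptions (not the talk's actual code, and the class and method names are illustrative):

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <cstring>
#include <optional>
#include <string>

// SPSC ring buffer of fixed-size log messages. tryPush is wait-free,
// lock-free, and allocation-free, so it is safe on the audio thread;
// tryPop runs on a background thread that owns the slow I/O.
class RealtimeLogQueue
{
public:
    bool tryPush(const char* msg)
    {
        const auto head = head_.load(std::memory_order_relaxed);
        const auto next = (head + 1) % capacity;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                         // full: drop, never block
        std::strncpy(slots_[head].data(), msg, maxLen - 1);
        slots_[head][maxLen - 1] = '\0';
        head_.store(next, std::memory_order_release);
        return true;
    }

    std::optional<std::string> tryPop()
    {
        const auto tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return std::nullopt;                  // empty
        std::string msg(slots_[tail].data());
        tail_.store((tail + 1) % capacity, std::memory_order_release);
        return msg;
    }

private:
    static constexpr size_t capacity = 256, maxLen = 128;
    std::array<std::array<char, maxLen>, capacity> slots_ {};
    std::atomic<size_t> head_ { 0 }, tail_ { 0 };
};
```

Dropping messages when the queue is full (rather than blocking or growing) is the key design choice: a lost log line is recoverable, while a glitch on the audio thread is not.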

Slides: https://data.audio.dev/talks/ADCxSF/designing-smart-algorithms/slides.pdf
_
Chris Apple

Chris Apple is the Lead Audio Software Engineer at Spatial Inc. He has spent the last 8 years specializing in immersive audio at Dolby and Roblox, including working on the first implementation of Dolby Atmos Music. He has contributed to the development of immersive audio in the Roblox Game Engine, the Dolby Atmos Music Panner, and the integration of Atmos into Apple Logic Pro X.

Recently, Chris has focused on streamlining the audio pipeline at Spatial. He is passionate about modern audio programming best practices when writing performant, real-time safe code.

Edited by Digital Medium Ltd - online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper

#audiodevcon #audiodev #logging


Designing Smart Algorithms with Traditional DSP vs. Machine Learning – Amit Shoham – ADCx SF

Join Us For ADC23 - London - 13-15 November 2023
More Info: https://audio.dev/
@audiodevcon

Designing Smart Algorithms with Traditional DSP vs. Machine Learning - Amit Shoham - ADCx SF

Smart algorithms such as wakeword detection, tempo detection, song recognition, and many others have become an integral component of countless applications. These algorithms rely on a wide range of machine learning and/or traditional DSP techniques. While machine learning techniques are now solving previously impossible problems, algorithms designed with more traditional engineering techniques often require fewer system resources and are easier to deploy. In this talk we'll compare and contrast machine learning and traditional engineering approaches, and discuss fundamental principles that will help you determine what mix of techniques is best for your application.

Slides: https://data.audio.dev/talks/ADCxSF/designing-smart-algorithms/slides.pdf
_

Amit Shoham

Amit Shoham has led dual careers as a musician and engineer. His professional roles have included DSP engineer, house music producer, DJ, computer vision engineer, remixer, deep learning engineer, mastering engineer, and troublemaker. Amit is currently a senior systems architect and algorithms guru at Artiphon, where his deep expertise in algorithm design and optimization helps bring to life innovative new musical instruments.

Streamed & Edited by Digital Medium Ltd - online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper

#audiodevcon #audiodev #deeplearning
