This paper presents a work-in-progress DSP architecture building on the Differentiable Digital Signal Processing (DDSP) library by Engel et al. (2020). The architecture is designed to process polyphonic musical audio in real time, making use of classical DSP methods for greater interpretability. Utilising recent advancements in lightweight polyphonic pitch detection models, multiple input audio streams can be processed simultaneously, and with a novel stochastic latent dimension, the model can generate novel audio timbres outside of the training dataset. Due to its lightweight nature, the proposed architecture is designed for live audio transformations with minimal input latency. The paper also discusses the limitations of the existing state-of-the-art model, which is deterministic and restricted to monophonic processing. Throughout, the paper explores potential applications of the proposed model. These include not only versatile timbre transfer between distinct instruments but also interpolation between timbres, resulting in the creation of new sounds that can expand the aural palette of musicians, sound designers, and experimental composers using live electronics. Furthermore, the model extends the library's toolkit, such as natural pitch shifting and room acoustic reverb modelling, to previously unusable polyphonic inputs.
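The abstract itself contains no code; as a rough illustration of the timbre-interpolation idea it describes, the sketch below blends two hypothetical latent timbre vectors (here simply harmonic amplitude profiles) and renders the result with additive harmonic synthesis, in the spirit of DDSP's harmonic oscillator. The latent vectors, function names, and parameter values are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def interpolate_timbre(z_a, z_b, alpha):
    """Linearly blend two latent timbre vectors (hypothetical latents)."""
    return (1.0 - alpha) * z_a + alpha * z_b

def harmonic_synth(f0, amplitudes, sample_rate=16000, duration=0.5):
    """Toy additive synthesis: sum of sinusoids at integer multiples of f0,
    weighted by the per-harmonic amplitudes, then peak-normalised."""
    t = np.arange(int(sample_rate * duration)) / sample_rate
    audio = np.zeros_like(t)
    for k, a_k in enumerate(amplitudes, start=1):
        audio += a_k * np.sin(2.0 * np.pi * k * f0 * t)
    return audio / max(np.max(np.abs(audio)), 1e-9)

# Two invented harmonic profiles standing in for learned latents
z_flute = np.array([1.0, 0.1, 0.05, 0.01])  # energy mostly in the fundamental
z_brass = np.array([0.6, 0.5, 0.4, 0.3])    # richer upper harmonics
z_mix = interpolate_timbre(z_flute, z_brass, alpha=0.5)
audio = harmonic_synth(f0=220.0, amplitudes=z_mix)
```

In the paper's setting the blend would happen in the model's stochastic latent space rather than directly on harmonic amplitudes, but the principle of traversing between two timbres via a continuous latent is the same.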
Title of host publication: Music in the AI Era
Subtitle of host publication: International Symposium on Computer Music Multidisciplinary Research (CMMR 2023)
Publication status: Accepted/In press - 10 Jul 2023
- Digital Signal Processing
- Machine Learning
- Timbre Transfer