Physical models of rigid bodies are used for sound synthesis in applications from virtual environments to music production. Traditional methods such as modal synthesis often rely on computationally expensive numerical solvers, while recent deep learning approaches are limited by post-processing of their results. In this work we present a novel end-to-end framework for training a deep neural network to generate modal resonators for a given 2D shape and material, using a bank of differentiable IIR filters. We demonstrate our method on a dataset of synthetic objects, but train our model using an audio-domain objective, paving the way for physically-informed synthesisers to be learned directly from recordings of real-world objects.
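To make the "bank of differentiable IIR filters" concrete, the following is a minimal sketch (not the authors' implementation) of a differentiable modal resonator bank in PyTorch. Each mode is rendered as an exponentially decaying sinusoid, i.e. the impulse response of a two-pole resonator, so an audio-domain loss can propagate gradients back to the per-mode frequencies, decays, and gains that a neural network might predict from a shape/material encoding. All names and parameter choices here (`ModalResonatorBank`, `n_modes`, the sample rate, etc.) are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of a differentiable modal resonator bank (assumed PyTorch).
import torch


class ModalResonatorBank(torch.nn.Module):
    def __init__(self, n_modes: int = 32, sample_rate: int = 16000):
        super().__init__()
        self.n_modes = n_modes
        self.sample_rate = sample_rate

    def forward(self, freqs, decays, gains, n_samples: int = 16000):
        """Render the summed impulse response of the resonator bank.

        freqs  : (batch, n_modes) modal frequencies in Hz
        decays : (batch, n_modes) exponential decay rates in 1/s
        gains  : (batch, n_modes) linear amplitudes
        """
        t = torch.arange(n_samples, dtype=torch.float32) / self.sample_rate
        t = t[None, None, :]                            # (1, 1, n_samples)
        phase = 2.0 * torch.pi * freqs[..., None] * t   # (batch, n_modes, n_samples)
        envelope = torch.exp(-decays[..., None] * t)    # per-mode exponential decay
        modes = gains[..., None] * envelope * torch.sin(phase)
        return modes.sum(dim=1)                         # (batch, n_samples)


if __name__ == "__main__":
    bank = ModalResonatorBank(n_modes=4)
    freqs = torch.tensor([[220.0, 440.0, 880.0, 1760.0]], requires_grad=True)
    decays = torch.full((1, 4), 5.0, requires_grad=True)
    gains = torch.full((1, 4), 0.25, requires_grad=True)
    audio = bank(freqs, decays, gains)
    # Any audio-domain objective (e.g. a multi-scale spectral loss) applied to
    # `audio` is differentiable with respect to the modal parameters.
    audio.pow(2).mean().backward()
    print(audio.shape, freqs.grad.shape)
```

Because the decaying-sinusoid form is the closed-form impulse response of a parallel two-pole IIR bank, this sketch sidesteps explicit recursive filtering while keeping the same parameterisation end-to-end trainable.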