We present a novel learning-based approach to compute the eigenmodes and acoustic transfer data needed for sound synthesis of arbitrary solid objects. Our approach combines two network-based solutions to form a complete learning-based 3D modal sound model: a 3D sparse convolution network that serves as the eigendecomposition solver, and an encoder-decoder network that predicts Far-Field Acoustic Transfer maps (FFAT maps). Using this approach, we compute the vibration modes (eigenmodes) and the per-mode FFAT maps (acoustic transfer data) of arbitrarily shaped objects at interactive rates, without requiring a precomputed dataset for each new object. Our experimental results demonstrate the effectiveness and benefits of our approach, and we compare its accuracy and efficiency against physically based sound synthesis methods.