Telepresence aims to create an immersive, virtual experience of the far-end audio and visual scene for users at the near-end. In this contribution, we propose an array-based binaural rendering system that converts the array microphone signals into head-related transfer function (HRTF)-filtered output signals for headphone rendering. The proposed approach is formulated on the basis of a model-matching principle (MMP) and is capable of delivering more natural immersion than the conventional localization-beamforming-HRTF filtering (LBH) approach. The MMP-based rendering system can be realized via multichannel inverse filtering (MIF) or multichannel deep filtering (MDF). In this study, we adopt the MDF approach and use LBH as well as MIF as the baselines. The all-neural system jointly captures spatial information (spatial rendering), preserves ambient sound, and reduces noise (enhancement) prior to generating the binaural outputs. Objective and subjective tests are employed to compare the proposed telepresence system with the two baselines.
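For readers unfamiliar with the conventional LBH baseline mentioned above, the following is a minimal illustrative sketch, not the authors' implementation: delay-and-sum beamforming toward a localized source direction followed by time-domain HRIR convolution for headphone playback. All function and variable names are hypothetical.

```python
import numpy as np

def lbh_binaural_render(mic_signals, steering_delays, hrir_left, hrir_right):
    """Illustrative LBH-style pipeline (hypothetical): beamform the array
    signals toward an estimated direction, then apply HRTF filtering
    (time-domain HRIR convolution) to obtain the binaural pair."""
    num_mics, num_samples = mic_signals.shape

    # Delay-and-sum beamforming: align each channel by its integer steering
    # delay (in samples) for the localized direction, then average.
    aligned = np.zeros_like(mic_signals)
    for m in range(num_mics):
        aligned[m] = np.roll(mic_signals[m], -int(steering_delays[m]))
    beam = aligned.mean(axis=0)

    # HRTF filtering: convolve the beamformer output with the left/right
    # head-related impulse responses for the same direction.
    left = np.convolve(beam, hrir_left, mode="full")[:num_samples]
    right = np.convolve(beam, hrir_right, mode="full")[:num_samples]
    return np.stack([left, right])
```

In contrast, the MMP-based system described in the abstract replaces this explicit localize-beamform-filter chain with filters (inverse or learned deep filters) that map the array signals directly to the target binaural signals.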