Many spatial filtering algorithms used for voice capture in, e.g., teleconferencing applications, can benefit from or even rely on knowledge of Relative Transfer Functions (RTFs). Accordingly, many RTF estimators have been proposed which, however, suffer from performance degradation under acoustically adverse conditions or require prior knowledge of the properties of the interfering sources. While state-of-the-art RTF estimators ignore prior knowledge about the acoustic enclosure, audio signal processing algorithms for teleconferencing equipment often operate in the same or at least a similar acoustic enclosure, e.g., a car or an office, such that training data can be collected. In this contribution, we use such data to train Variational Autoencoders (VAEs) in an unsupervised manner and apply the trained VAEs to enhance imprecise RTF estimates. Furthermore, a hybrid of classic RTF estimation and the trained VAE is investigated. Comprehensive experiments with real-world data confirm the efficacy of the proposed method.
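To make the VAE-based enhancement idea concrete, the following is a minimal sketch of a single VAE forward pass applied to a (stand-in) noisy RTF feature vector. All dimensions, weight values, and names are hypothetical and do not reflect the paper's actual architecture or training procedure; randomly initialised weights stand in for a model trained on enclosure-specific data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: an RTF estimate stacked into a real-valued feature vector.
D, H, Z = 16, 32, 4  # input, hidden, latent dimensions (illustrative only)

# Random weights stand in for a VAE trained unsupervised on in-enclosure data.
W_enc = rng.normal(scale=0.1, size=(H, D)); b_enc = np.zeros(H)
W_mu  = rng.normal(scale=0.1, size=(Z, H)); b_mu  = np.zeros(Z)
W_lv  = rng.normal(scale=0.1, size=(Z, H)); b_lv  = np.zeros(Z)
W_dec = rng.normal(scale=0.1, size=(D, Z)); b_dec = np.zeros(D)

def vae_forward(x):
    """Encode, reparameterise, decode; return reconstruction and ELBO loss."""
    h = np.tanh(W_enc @ x + b_enc)                       # encoder hidden layer
    mu, logvar = W_mu @ h + b_mu, W_lv @ h + b_lv        # latent Gaussian params
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=Z)   # reparameterisation trick
    x_hat = W_dec @ z + b_dec                            # decoded ("enhanced") estimate
    recon = np.sum((x - x_hat) ** 2)                     # Gaussian reconstruction term
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))  # KL to N(0, I) prior
    return x_hat, recon + kl

x_noisy = rng.normal(size=D)          # stand-in for an imprecise RTF estimate
x_hat, loss = vae_forward(x_noisy)
print(x_hat.shape, np.isfinite(loss))
```

In the paper's setting, a trained decoder would map the latent code back to an RTF consistent with the enclosure's acoustics; the hybrid variant would combine this reconstruction with a classic RTF estimate.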