Speech dereverberation is an important requirement in many robust speech processing tasks. Supervised deep learning (DL) models give state-of-the-art performance for single-channel speech dereverberation. Temporal convolutional networks (TCNs) are commonly used for sequence modelling in speech enhancement tasks. A feature of TCNs is that they have a receptive field (RF), dependent on the specific model configuration, which determines the number of input frames that can be observed to produce an individual output frame. It has been shown that TCNs are capable of performing dereverberation of simulated speech data; however, a thorough analysis, especially one focused on the RF, is still lacking in the literature. This paper analyses dereverberation performance depending on the model size and the RF of TCNs. Experiments using the WHAMR corpus, extended to include room impulse responses (RIRs) with larger RT60 values, demonstrate that a larger RF can yield significant performance improvements when training smaller TCN models. It is also demonstrated that TCNs benefit from a wider RF when dereverberating RIRs with larger RT60 values.
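As a minimal sketch of the RF notion above, assuming the common Conv-TasNet-style TCN configuration (kernel size P, X dilated convolutional blocks per repeat with dilations 1, 2, 4, ..., 2^(X-1), repeated R times), the RF in frames can be computed as follows; the function name and parameter names are illustrative, not from the paper:

```python
def tcn_receptive_field(kernel_size: int, n_blocks: int, n_repeats: int) -> int:
    """Receptive field (in frames) of a TCN whose dilations double
    within each repeat: 1, 2, 4, ..., 2**(n_blocks - 1).

    Each dilated conv with kernel k and dilation d widens the RF by
    (k - 1) * d, so the total is 1 + R * (k - 1) * (2**X - 1).
    """
    dilation_sum = sum(2 ** b for b in range(n_blocks))  # = 2**n_blocks - 1
    return 1 + n_repeats * (kernel_size - 1) * dilation_sum


# Example: kernel size 3, 8 blocks per repeat, 3 repeats.
print(tcn_receptive_field(3, 8, 3))  # 1531 frames
```

This makes explicit why the RF depends on the model configuration: increasing the number of blocks per repeat grows the RF exponentially, while kernel size and repeat count grow it linearly, so small models can trade parameters for RF by adjusting dilation depth.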