Deep neural networks (DNNs) have achieved strong predictive performance in a variety of speech processing tasks. In particular, monaural speech separation has been successfully addressed by a DNN-based method called deep clustering (DC), in which a DNN assigns a continuous embedding vector to each time-frequency (TF) bin so that the similarity between two embedding vectors measures how likely the corresponding pair of TF bins is to be dominated by the same speaker. In DC, the DNN is trained so that the embedding vectors of TF bins dominated by the same speaker are drawn close to each other. One concern regarding DC is that the embedding process described by a DNN is a black-box structure, which is usually very hard to interpret. A potential weakness of this non-interpretable black-box structure is that it lacks the flexibility to address mismatches between training and test conditions (caused, for instance, by reverberation). To overcome this limitation, in this paper we propose the concept of explainable deep clustering (X-DC), whose network architecture can be interpreted as a process of fitting learnable spectrogram templates to an input spectrogram followed by Wiener filtering. During training, the elements of the spectrogram templates and their activations are constrained to be non-negative, which promotes sparsity in their values and thus improves interpretability. The main advantage of this framework is that it naturally allows us to incorporate a model adaptation mechanism into the network thanks to its physically interpretable structure. We experimentally show that the proposed X-DC enables us to visualize and understand the clues the model uses to determine the embedding vectors, while achieving speech separation performance comparable to that of the original DC models.
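To make the architectural idea concrete, the following is a minimal numerical sketch (not the paper's implementation) of how non-negative spectrogram templates, their activations, and Wiener-filter-like masks can relate to DC-style embeddings. All array sizes, the random parameters, and the template-to-source assignment are hypothetical illustrations, assuming only NumPy is available; in the actual X-DC network the templates and activations would be learned by a DNN.

```python
import numpy as np

rng = np.random.default_rng(0)

F, T = 257, 100      # frequency bins, time frames (hypothetical sizes)
K, D = 20, 2         # number of learnable templates, number of sources

# Input magnitude spectrogram (random stand-in for a real mixture).
X = np.abs(rng.normal(size=(F, T)))

# Hypothetical non-negative parameters: spectrogram templates W (F x K)
# and their activations H (K x T).  In X-DC these would be learned under
# non-negativity constraints; here we just draw random non-negative values.
W = np.abs(rng.normal(size=(F, K)))
H = np.abs(rng.normal(size=(K, T)))

# Fit of the templates to the input spectrogram: template k contributes
# the rank-1 component W[:, k] (outer) H[k, :] to the model spectrogram.
components = np.einsum('fk,kt->kft', W, H)          # (K, F, T)
model_spec = components.sum(axis=0) + 1e-8          # (F, T)

# Wiener-filter-like masks: relative contribution of each template to every
# TF bin (non-negative, sums to 1 over k at each bin).
masks = components / model_spec                     # (K, F, T)

# Assign each template to a source (hypothetical random assignment) and
# recover per-source estimates by applying the summed masks to the input.
template_to_source = rng.integers(0, D, size=K)
separated = np.zeros((D, F, T))
for k in range(K):
    separated[template_to_source[k]] += masks[k] * X

# A DC-style embedding can be read off from the same quantities: let the
# embedding of a TF bin be the total mask mass per source.  Bins dominated
# by the same source then have similar embeddings, which is the property
# DC training enforces.
V = np.zeros((F, T, D))
for k in range(K):
    V[:, :, template_to_source[k]] += masks[k]

print(V.reshape(-1, D).shape)   # (F*T, D) embedding matrix
```

Because every intermediate quantity in this sketch (templates, activations, masks) is non-negative and has a spectrogram-like shape, each can be plotted and inspected directly, which is the sense in which the X-DC structure is physically interpretable.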