Speech separation aims to isolate individual voices from an audio mixture of multiple simultaneous talkers. Although audio-only approaches achieve satisfactory performance, they rely on strategies designed for predefined conditions, which limits their applicability in complex auditory scenes. To address the cocktail party problem, we propose a novel audio-visual speech separation model. In our model, we use a face detector to determine the number of speakers in the scene and use visual information to avoid the permutation problem. To improve the model's generalization to unknown speakers, we explicitly extract speech-related visual features from the visual inputs via an adversarial disentanglement method and use these features to assist speech separation. In addition, we adopt a time-domain approach, which avoids the phase reconstruction problem present in time-frequency domain models. To compare our model's performance with that of other models, we create two benchmark datasets of 2-speaker mixtures from the GRID and TCDTIMIT audio-visual datasets. Through a series of experiments, our proposed model is shown to outperform the state-of-the-art audio-only model and three audio-visual models.
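To make the described pipeline concrete, the following is a minimal sketch, assuming a Conv-TasNet-style time-domain encoder/decoder and per-speaker visual conditioning; the class name, layer sizes, and fusion scheme are illustrative assumptions, not the exact architecture of the proposed model.

```python
# Minimal illustrative sketch (not the authors' exact architecture):
# a 1-D conv audio encoder/decoder (time-domain, so no phase reconstruction),
# a projection of speech-related visual features per detected speaker,
# fusion with the mixture representation, and mask-based separation.
import torch
import torch.nn as nn


class AudioVisualSeparator(nn.Module):
    def __init__(self, n_filters=256, kernel_size=16, stride=8,
                 visual_dim=128, n_speakers=2):
        super().__init__()
        self.n_speakers = n_speakers  # e.g., obtained from a face detector
        # Time-domain encoder/decoder operating directly on waveforms.
        self.encoder = nn.Conv1d(1, n_filters, kernel_size, stride=stride)
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel_size, stride=stride)
        # Visual features (e.g., from an adversarially disentangled lip encoder)
        # are projected and concatenated with the mixture representation.
        self.visual_proj = nn.Linear(visual_dim, n_filters)
        self.separator = nn.Sequential(
            nn.Conv1d(2 * n_filters, n_filters, 1),
            nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, 1),
        )

    def forward(self, mixture, visual_feats):
        # mixture: (batch, samples); visual_feats: (batch, n_speakers, frames, visual_dim)
        mix_rep = torch.relu(self.encoder(mixture.unsqueeze(1)))   # (B, F, T)
        estimates = []
        for spk in range(self.n_speakers):
            v = self.visual_proj(visual_feats[:, spk])             # (B, frames, F)
            # Upsample visual features to the audio frame rate before fusion.
            v = nn.functional.interpolate(
                v.transpose(1, 2), size=mix_rep.shape[-1], mode="nearest")
            mask = torch.sigmoid(self.separator(torch.cat([mix_rep, v], dim=1)))
            estimates.append(self.decoder(mix_rep * mask).squeeze(1))
        return torch.stack(estimates, dim=1)                       # (B, n_speakers, samples)


if __name__ == "__main__":
    model = AudioVisualSeparator()
    mix = torch.randn(2, 16000)          # 1 s of 16 kHz mixture audio
    faces = torch.randn(2, 2, 25, 128)   # 25 video frames of features per speaker
    print(model(mix, faces).shape)       # torch.Size([2, 2, 16000])
```

Because each output channel is conditioned on one speaker's visual stream, the mapping from outputs to speakers is fixed, which is how visual conditioning sidesteps the permutation problem faced by audio-only training objectives.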