This paper addresses the problem of informed source separation (ISS), where the sources are accessible during the so-called \textit{encoding} stage. Previous works computed side-information during the encoding stage and designed source separation models to utilize that side-information to improve separation performance. In contrast, in this work we improve the performance of a pretrained separation model that does not use any side-information. To this end, we propose to adopt an adversarial attack for the opposite purpose: rather than computing a perturbation that degrades the separation, we compute an imperceptible perturbation, called amicable noise, that improves it. Experimental results show that the proposed approach selectively improves the performance of the targeted separation model by 2.23 dB on average and is robust to signal compression. Moreover, we propose multi-model multi-purpose learning that controls the effect of the perturbation on different models individually.
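To make the sign-flip relative to a standard adversarial attack concrete, the following is a minimal toy sketch (not the paper's method): an assumed differentiable separation loss is stood in for by a simple quadratic around a hypothetical ideal input `x_ideal`, and a PGD-style loop *descends* the loss (an adversarial attack would ascend it) while projecting the noise into an L-infinity budget `epsilon` to keep it imperceptible. All names and values here are illustrative assumptions.

```python
# Toy sketch of computing an "amicable" perturbation.
# Assumption: the separator exposes a differentiable loss; here it is
# replaced by a quadratic stand-in so the example is self-contained.

def loss(x, x_ideal):
    """Toy separation loss: lower means better separation."""
    return sum((xi - ti) ** 2 for xi, ti in zip(x, x_ideal))

def grad(x, x_ideal):
    """Analytic gradient of the toy loss w.r.t. the input."""
    return [2.0 * (xi - ti) for xi, ti in zip(x, x_ideal)]

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def amicable_noise(x, x_ideal, epsilon=0.1, alpha=0.02, steps=50):
    """PGD-style iteration with the *opposite* sign of an attack:
    step to reduce the loss, then project into the L-inf ball."""
    delta = [0.0] * len(x)
    for _ in range(steps):
        g = grad([xi + di for xi, di in zip(x, delta)], x_ideal)
        # Descend (an adversarial attack would use "+ alpha * sign(gi)").
        delta = [di - alpha * sign(gi) for di, gi in zip(delta, g)]
        # Imperceptibility budget: clamp each component to [-eps, eps].
        delta = [max(-epsilon, min(epsilon, di)) for di in delta]
    return delta

x = [0.5, -0.3, 0.2]           # toy input mixture
x_ideal = [0.45, -0.25, 0.15]  # hypothetical input the separator prefers
d = amicable_noise(x, x_ideal)
x_amicable = [xi + di for xi, di in zip(x, d)]
```

In the paper's setting the quadratic would be replaced by the pretrained model's actual separation loss, with gradients obtained by backpropagation, but the structure of the loop (descend instead of ascend, then project) is the same.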