We introduce the novel concept of anti-transfer learning for speech processing with convolutional neural networks. While transfer learning assumes that the learning process for a target task will benefit from re-using representations learned for another task, anti-transfer avoids the learning of representations that have been learned for an orthogonal task, i.e., one that is not relevant and potentially misleading for the target task, such as speaker identity for speech recognition or speech content for emotion recognition. In anti-transfer learning, we penalize the similarity between the activations of a network being trained and those of another network previously trained on an orthogonal task, which yields more suitable representations. This leads to better generalization and provides a degree of control over correlations that are spurious or undesirable, e.g., to avoid social bias. We have implemented anti-transfer for convolutional neural networks in different configurations with several similarity metrics and aggregation functions, which we evaluate and analyze on a range of speech and audio tasks and settings using six datasets. We show that anti-transfer indeed leads to the intended invariance to the orthogonal task and to more appropriate features for the target task at hand. Anti-transfer learning consistently improves classification accuracy in all test cases. While anti-transfer incurs computation and memory costs at training time, there is relatively little additional computational cost when pre-trained models for the orthogonal tasks are available. Anti-transfer is widely applicable and particularly useful where a specific invariance is desirable or where trained models are available and labeled data for orthogonal tasks are difficult to obtain.
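To illustrate the core mechanism, here is a minimal PyTorch sketch of an anti-transfer penalty, assuming cosine similarity as the similarity metric and a mean as the aggregation function; the names (anti_transfer_loss, feat_target, feat_orthogonal, beta) are hypothetical and not taken from the paper, which evaluates several metric and aggregation choices.

```python
# Minimal sketch of an anti-transfer loss (assumed formulation, not the
# paper's exact one): penalize similarity between activations of the
# network being trained and those of a frozen network pre-trained on an
# orthogonal task.
import torch
import torch.nn.functional as F

def anti_transfer_loss(feat_target: torch.Tensor,
                       feat_orthogonal: torch.Tensor) -> torch.Tensor:
    """Mean cosine similarity between flattened layer activations.

    feat_target:     activations from the network being trained
    feat_orthogonal: activations from the frozen pre-trained network,
                     same layer shape, e.g. [batch, channels, h, w]
    """
    t = feat_target.flatten(start_dim=1)
    o = feat_orthogonal.flatten(start_dim=1).detach()  # no gradient into the frozen net
    # Higher similarity -> higher penalty; minimizing this term pushes the
    # new representations away from the orthogonal-task representations.
    return F.cosine_similarity(t, o, dim=1).mean()

# Usage inside a training step; beta is a hypothetical weighting coefficient:
#   loss = task_loss + beta * anti_transfer_loss(target_acts, orthogonal_acts)
```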