The task of infomin learning aims to learn a representation with high utility while being uninformative about a specified target, with the latter achieved by minimising the mutual information between the representation and the target. It has broad applications, ranging from training fair prediction models against protected attributes, to unsupervised learning with disentangled representations. Recent works on infomin learning mainly use adversarial training, which involves training a neural network to estimate mutual information or its proxy and is therefore slow and difficult to optimise. Drawing on recent advances in slicing techniques, we propose a new infomin learning approach, which uses a novel proxy metric for mutual information. We further derive an accurate and analytically computable approximation to this proxy metric, thereby removing the need to construct neural network-based mutual information estimators. Experiments on algorithmic fairness, disentangled representation learning and domain adaptation verify that our method can effectively remove unwanted information within a limited time budget.
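To make the slicing idea concrete, below is a minimal illustrative sketch of a slicing-based dependence proxy: it averages the squared Pearson correlation between random one-dimensional projections of the representation and the target. This is a hypothetical simplification for intuition only, not the paper's actual proxy metric or its analytical approximation; the function name and parameters are assumptions.

```python
import numpy as np

def sliced_dependence_proxy(Z, T, n_slices=50, seed=0):
    """Toy slicing-based dependence proxy (illustrative, not the paper's metric):
    average squared Pearson correlation between random 1-D projections
    ("slices") of the representation Z and the target T. Requires no
    neural-network estimator and is cheap to compute in closed form."""
    rng = np.random.default_rng(seed)
    dz, dt = Z.shape[1], T.shape[1]
    total = 0.0
    for _ in range(n_slices):
        # Draw random unit directions to slice Z and T down to 1-D.
        u = rng.standard_normal(dz)
        u /= np.linalg.norm(u)
        v = rng.standard_normal(dt)
        v /= np.linalg.norm(v)
        z, t = Z @ u, T @ v
        # Standardise each slice so the inner product is a correlation.
        z = (z - z.mean()) / (z.std() + 1e-8)
        t = (t - t.mean()) / (t.std() + 1e-8)
        total += np.mean(z * t) ** 2
    return total / n_slices
```

A quantity like this is near zero when Z and T are independent and grows when they share information, so minimising it (alongside a utility objective) yields the kind of adversary-free infomin training the abstract describes.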