A wide range of machine learning applications, such as privacy-preserving learning, algorithmic fairness, and domain adaptation/generalization, involve learning \emph{invariant representations} of the data that aim to achieve two competing goals: (a) maximize information or accuracy with respect to a target response, and (b) maximize invariance or independence with respect to a set of protected features (e.g.\ for fairness or privacy). Despite their wide applicability, theoretical understanding of the optimal tradeoffs between accuracy and invariance achievable by invariant representations is still severely lacking. In this paper, we provide precisely such an information-theoretic analysis of these tradeoffs under both classification and regression settings. We give a geometric characterization of the accuracy and invariance achievable by any representation of the data; we term this feasible region the information plane. We provide a lower bound on this feasible region in the classification case and an exact characterization in the regression case, which allows us to either bound or exactly characterize the Pareto optimal frontier between accuracy and invariance. Although our contributions are mainly theoretical, a key practical application of our results is in certifying the potential sub-optimality of any given representation learning algorithm for either classification or regression tasks. Our results shed new light on the fundamental interplay between accuracy and invariance, and may be useful in guiding the design of future representation learning algorithms.
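The information plane described above can be illustrated on toy discrete data: every representation $Z$ of the input maps to a point $(I(Z;A),\, I(Z;Y))$, with invariance measured against the protected attribute $A$ on one axis and accuracy with respect to the target $Y$ on the other. The following is a minimal sketch, assuming a plug-in mutual-information estimator and a toy binary setup; all names and the specific estimator are illustrative and not taken from the paper:

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) ), with counts c, px[x], py[y]
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

# Toy data: the protected attribute A is a noisy copy of the target Y,
# so any representation informative about Y necessarily leaks some
# information about A -- the source of the accuracy-invariance tension.
random.seed(0)
ys, attrs = [], []
for _ in range(20000):
    y = random.randint(0, 1)
    ys.append(y)
    attrs.append(y if random.random() < 0.8 else 1 - y)

# Two extreme representations Z and their points on the information plane:
z_copy = ys                  # maximally accurate, not invariant
z_const = [0] * len(ys)      # perfectly invariant, zero accuracy
for name, z in [("copy", z_copy), ("const", z_const)]:
    print(name, (mutual_information(z, attrs), mutual_information(z, ys)))
```

Any learned representation occupies a point between these two extremes; the characterizations in the paper describe which such points are feasible at all, and hence how far a given algorithm's representation sits from the Pareto frontier.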