In this article, we propose a new variational approach for learning private and/or fair representations. This approach is based on the Lagrangians of a new formulation of the privacy and fairness optimization problems. In this formulation, we aim to generate representations of the data that retain a prescribed level of the relevant information that is not shared with the private or sensitive data, while minimizing the remaining information they retain. The proposed approach (i) highlights the similarities between the privacy and fairness problems, (ii) allows us to control the trade-off between utility and privacy or fairness through the Lagrange multiplier parameter, and (iii) can be easily incorporated into common representation learning algorithms such as the VAE, the $\beta$-VAE, the VIB, or the nonlinear IB.
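To make points (ii) and (iii) concrete, below is a minimal sketch, assuming a VIB/$\beta$-VAE-style objective in which a Lagrange multiplier `lam` weights a rate (compression) term against a utility term. It is an illustrative assumption, not the exact Lagrangians proposed here, and it omits the additional term involving the private or sensitive data; all names (`SimpleEncoder`, `SimpleDecoder`, `lagrangian_loss`, `lam`) are hypothetical placeholders.

```python
# Minimal sketch (assumption, not the proposed objective): a Lagrange-multiplier-
# weighted variational loss of the VIB/beta-VAE family. The privacy/fairness term
# involving the sensitive data is intentionally omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleEncoder(nn.Module):
    """Maps x to the mean and log-variance of a Gaussian q(z|x)."""
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.net = nn.Linear(x_dim, 2 * z_dim)
        self.z_dim = z_dim

    def forward(self, x):
        h = self.net(x)
        return h[:, :self.z_dim], h[:, self.z_dim:]

class SimpleDecoder(nn.Module):
    """Predicts the task label y from the representation z."""
    def __init__(self, z_dim, num_classes):
        super().__init__()
        self.net = nn.Linear(z_dim, num_classes)

    def forward(self, z):
        return self.net(z)

def lagrangian_loss(encoder, decoder, x, y, lam):
    """Utility term plus a lam-weighted rate term.

    lam plays the role of the Lagrange multiplier: it trades off how much
    task-relevant information the representation keeps against how much
    remaining information it is allowed to retain.
    """
    mu, logvar = encoder(x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization

    utility = F.cross_entropy(decoder(z), y)  # keep the relevant information
    rate = -0.5 * torch.mean(
        torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    )  # KL(q(z|x) || N(0, I)), the "remaining information" penalty
    return utility + lam * rate

# Usage: one gradient step on random data.
enc = SimpleEncoder(x_dim=20, z_dim=8)
dec = SimpleDecoder(z_dim=8, num_classes=2)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
loss = lagrangian_loss(enc, dec, x, y, lam=1.0)
opt.zero_grad()
loss.backward()
opt.step()
```

Sweeping `lam` in such a loop is how the utility-versus-compression trade-off is typically explored in the $\beta$-VAE/VIB family; the formulation proposed here adds the privacy/fairness-specific term on top of this mechanism.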