We propose two new measures for extracting the unique information about a message $M$ contained in $X$ but not in $Y$, when $X, Y$, and $M$ are random variables with a given joint distribution. We take a Markov-based approach, motivated by questions in fair machine learning and inspired by similar Markov-based optimization problems used in the Information Bottleneck and Common Information frameworks. We obtain a complete characterization of our definitions in the Gaussian case (namely, when $X, Y$, and $M$ are jointly Gaussian), under the assumption of Gaussian optimality. We also examine the consistency of our definitions with the partial information decomposition (PID) framework, and show that these Markov-based definitions achieve non-negativity, but not symmetry, within the PID framework.