Bivariate partial information decompositions (PIDs) characterize how the information in a "message" random variable is decomposed between two "constituent" random variables in terms of unique, redundant and synergistic information components. These components are a function of the joint distribution of the three variables, and are typically defined using an optimization over the space of all possible joint distributions. This makes PIDs computationally challenging to compute in practice and restricts their use to low-dimensional random vectors. To ease this burden, we consider the case of jointly Gaussian random vectors in this paper. This case was previously examined by Barrett (2015), who showed that certain operationally well-motivated PIDs reduce to a closed-form expression for scalar messages. Here, we show that Barrett's result does not extend to vector messages in general, and we characterize the set of multivariate Gaussian distributions for which the PID admits a closed form. Then, for all other multivariate Gaussian distributions, we propose a convex optimization framework for approximately computing a specific PID definition based on the statistical concept of deficiency. Using simplifying assumptions specific to the Gaussian case, we provide an efficient algorithm to approximately compute the bivariate PID for multivariate Gaussian variables with tens or even hundreds of dimensions. We also justify the goodness of this approximation both theoretically and empirically.