The information bottleneck (IB) approach, initially introduced by [1] to assess the compression-relevance tradeoff in a remote source coding problem in communications, has recently gained popularity in modern machine learning (ML). Unlike most, if not all, prior uses of IB in the literature, which employ it either as an analytical tool for, say, deep neural networks, or as an optimization objective, in this paper we propose to address the secrecy issue in ML by considering the fundamental model of Gaussian mixture classification. We derive, for the first time, closed-form achievable bounds for the IB problem in this setting, and we provide a precise characterization of the underlying performance-secrecy tradeoff. Experiments on both synthetic and real-world data confirm the satisfactory performance of the proposed scheme.
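For reference, the standard IB formulation of [1], which the bounds derived in this paper specialize to the Gaussian mixture setting, seeks a stochastic representation $T$ of the observation $X$ that is maximally informative about the relevance variable $Y$ while remaining maximally compressed:
\[
\min_{p(t \mid x)} \; I(X;T) - \beta\, I(T;Y), \qquad \beta \ge 0,
\]
where the multiplier $\beta$ trades compression (small $I(X;T)$) against relevance (large $I(T;Y)$). This is only the generic objective; the closed-form specialization and the secrecy formulation are developed in the body of the paper.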