In this work, we generalize the information bottleneck (IB) approach to the multi-view learning setting. The exponentially growing complexity of the optimal representation motivates two novel formulations with more favorable performance-complexity tradeoffs. The first approach forms a stochastic consensus and is suited to scenarios with significant {\em representation overlap} between the different views. The second method, relying on incremental updates, is tailored to the opposite extreme, where representation overlap is minimal. In both cases, we extend our earlier work on the alternating direction method of multipliers (ADMM) solver and establish its convergence and scalability. Empirically, we find that the proposed methods outperform state-of-the-art approaches on multi-view classification problems over a broad range of modelling parameters.
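For context, the classical single-view IB principle of Tishby et al. seeks a compressed representation $T$ of an observation $X$ that remains informative about a relevance variable $Y$, subject to the Markov chain $Y \leftrightarrow X \leftrightarrow T$. The formulations above generalize this criterion to multiple views; as a point of reference (the exact multi-view objectives are developed in the body of the paper), the standard single-view objective is
\begin{equation*}
    \min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y),
\end{equation*}
where $\beta > 0$ sets the compression-relevance tradeoff.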
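Likewise, while the stochastic-consensus solver is specified in the paper itself, it may help to recall the generic consensus ADMM template (Boyd et al.) that such solvers typically instantiate; the proposed variant may differ in its objective and update details. For $\min_{\{\mathbf{x}_i\},\, \mathbf{z}} \sum_{i=1}^{V} f_i(\mathbf{x}_i)$ subject to $\mathbf{x}_i = \mathbf{z}$ across the $V$ views, the scaled-form iterations are
\begin{align*}
    \mathbf{x}_i^{k+1} &= \arg\min_{\mathbf{x}_i}\, f_i(\mathbf{x}_i) + \tfrac{\rho}{2}\,\big\|\mathbf{x}_i - \mathbf{z}^{k} + \mathbf{u}_i^{k}\big\|_2^2, \\
    \mathbf{z}^{k+1} &= \tfrac{1}{V} \textstyle\sum_{i=1}^{V} \big(\mathbf{x}_i^{k+1} + \mathbf{u}_i^{k}\big), \\
    \mathbf{u}_i^{k+1} &= \mathbf{u}_i^{k} + \mathbf{x}_i^{k+1} - \mathbf{z}^{k+1},
\end{align*}
where $\rho > 0$ is the penalty parameter and $\mathbf{u}_i$ are scaled dual variables; the consensus variable $\mathbf{z}$ plays the role of the representation shared across views.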