Tensor Gaussian graphical models (GGMs), which characterize conditional independence structures within tensor data, have important applications in numerous areas. Yet the tensor data available in any single study is often limited due to high acquisition costs. Although related studies can provide additional data, it remains an open question how to pool such heterogeneous data. In this paper, we propose a transfer learning framework for tensor GGMs that takes full advantage of informative auxiliary domains, even in the presence of non-informative ones, thanks to carefully designed data-adaptive weights. Our theoretical analysis shows that leveraging information from auxiliary domains yields substantially improved estimation errors and variable selection consistency on the target domain under much weaker conditions. Extensive numerical experiments on both synthetic tensor graphs and brain functional connectivity network data demonstrate the satisfactory performance of the proposed method.