Achieving efficient and robust multi-channel data learning is a challenging task in data science. By exploiting low-rankness in the transformed domain, i.e., transformed low-rankness, the tensor Singular Value Decomposition (t-SVD) has achieved extensive success in multi-channel data representation and has recently been extended to function representation, e.g., neural networks with t-product layers (t-NNs). However, it remains unclear how t-SVD theoretically affects the learning behavior of t-NNs. This paper is the first to answer this question by deriving upper bounds on the generalization error of both standard and adversarially trained t-NNs. It reveals that t-NNs compressed by exact transformed low-rank parameterization can achieve a sharper adversarial generalization bound. In practice, t-NNs rarely have exactly transformed low-rank weights; nevertheless, our analysis shows that, under certain conditions, adversarial training with gradient flow (GF) implicitly regularizes over-parameterized t-NNs with ReLU activations towards transformed low-rank parameterization. We also establish adversarial generalization bounds for t-NNs with approximately transformed low-rank weights. Our analysis indicates that transformed low-rank parameterization is a promising way to enhance robust generalization for t-NNs.
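To make the t-product layers referenced above concrete, the sketch below illustrates the standard t-product (computed in the DFT domain along the third mode, i.e., the channel mode) and a single ReLU t-product layer. This is a minimal NumPy illustration, not the paper's implementation; the function names `t_product` and `t_nn_layer` and the toy tensor sizes are assumptions for exposition only.

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x m x n3), yielding n1 x m x n3.

    Computed in the transformed (DFT) domain: FFT along the third mode,
    slice-wise matrix products of the frontal slices, then inverse FFT.
    """
    n3 = A.shape[2]
    A_hat = np.fft.fft(A, axis=2)
    B_hat = np.fft.fft(B, axis=2)
    C_hat = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        C_hat[:, :, k] = A_hat[:, :, k] @ B_hat[:, :, k]
    return np.real(np.fft.ifft(C_hat, axis=2))

def t_nn_layer(W, X):
    """One (hypothetical) t-product layer with ReLU activation."""
    return np.maximum(t_product(W, X), 0.0)

# Toy usage: a 3-channel feature tensor passed through one t-product layer.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 1, 3))   # input: 4 features, 1 sample column, 3 channels
W = rng.standard_normal((8, 4, 3))   # layer weight tensor
print(t_nn_layer(W, X).shape)        # -> (8, 1, 3)
```

Under this view, "transformed low-rank" weights are weight tensors whose frontal slices in the DFT domain have low matrix rank, which is the structure the generalization analysis exploits.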