Tensor networks (TN) offer a powerful framework for efficiently representing very high-dimensional objects. TN have recently shown their potential for machine learning applications and offer a unifying view of common tensor decomposition models such as Tucker, tensor train (TT), and tensor ring (TR). However, identifying the best tensor network structure from data for a given task is challenging. In this work, we leverage the TN formalism to develop a generic and efficient adaptive algorithm that jointly learns the structure and the parameters of a TN from data. Our method is based on a simple greedy approach: starting from a rank-one tensor, it successively identifies the most promising tensor network edges for small rank increments. Our algorithm can adaptively identify TN structures with a small number of parameters that effectively optimize any differentiable objective function. Experiments on tensor decomposition, tensor completion, and model compression tasks demonstrate the effectiveness of the proposed algorithm. In particular, our method outperforms the state-of-the-art evolutionary topology search [Li and Sun, 2020] for tensor decomposition of images (while being orders of magnitude faster) and finds efficient tensor network structures to compress neural networks, outperforming popular TT-based approaches [Novikov et al., 2015].
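The greedy structure search described above can be illustrated with a minimal sketch on a toy three-node fully-connected tensor network. All names here (`contract`, `fit`, `greedy_search`) are hypothetical, and the sketch simplifies the actual method: whereas the paper's algorithm warm-starts the parameters when an edge rank is incremented, this version simply refits every candidate from scratch with plain gradient descent on the squared reconstruction error.

```python
import numpy as np

def contract(A, B, C):
    """Contract a fully-connected 3-node TN into a 3rd-order tensor.
    A: (d1, r12, r13), B: (d2, r12, r23), C: (d3, r13, r23)."""
    return np.einsum('iab,jac,kbc->ijk', A, B, C)

def fit(target, ranks, iters=500, lr=0.05, seed=0):
    """Fit the TN cores to `target` at fixed edge ranks by gradient
    descent on the squared reconstruction error; return the final loss."""
    rng = np.random.default_rng(seed)
    d1, d2, d3 = target.shape
    r12, r13, r23 = ranks
    A = rng.normal(scale=0.3, size=(d1, r12, r13))
    B = rng.normal(scale=0.3, size=(d2, r12, r23))
    C = rng.normal(scale=0.3, size=(d3, r13, r23))
    for _ in range(iters):
        R = contract(A, B, C) - target  # residual tensor
        # Gradients of ||R||^2 w.r.t. each core (factor 2 folded into lr):
        gA = np.einsum('ijk,jac,kbc->iab', R, B, C)
        gB = np.einsum('ijk,iab,kbc->jac', R, A, C)
        gC = np.einsum('ijk,iab,jac->kbc', R, A, B)
        A -= lr * gA
        B -= lr * gB
        C -= lr * gC
    return float(np.sum((contract(A, B, C) - target) ** 2))

def greedy_search(target, steps=3):
    """Start from all edge ranks equal to 1; at each step, increment the
    single edge whose rank increase yields the lowest refitted loss."""
    ranks = [1, 1, 1]
    history = [(tuple(ranks), fit(target, ranks))]
    for _ in range(steps):
        candidates = []
        for e in range(3):
            trial = list(ranks)
            trial[e] += 1  # small rank increment on edge e
            candidates.append((fit(target, trial), trial))
        loss, ranks = min(candidates, key=lambda t: t[0])
        history.append((tuple(ranks), loss))
    return ranks, history
```

Because each step only probes single-edge rank increments, the search explores a narrow, adaptive path through the space of TN structures instead of enumerating topologies, which is what makes the greedy approach cheap compared to evolutionary search.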