We propose a new algorithm, higher-order QR iteration (HOQRI), for computing a low multilinear rank approximation (LMLRA), also known as the Tucker decomposition, of large and sparse tensors. Compared to the celebrated higher-order orthogonal iteration (HOOI), HOQRI replaces the singular value decomposition performed in each HOOI iteration with a simpler QR-based orthogonalization. More importantly, when dealing with extremely large and sparse data tensors, HOQRI completely eliminates the intermediate memory explosion by defining a new sparse tensor operation called TTMcTC (short for tensor times matrix chains times core). Furthermore, recognizing that the orthonormality constraints form a Cartesian product of Stiefel manifolds, we invoke the framework of manifold optimization and show that HOQRI is guaranteed to converge to the set of stationary points. Numerical experiments on synthetic and real data showcase the effectiveness of HOQRI.
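As a rough illustration of the QR-for-SVD substitution described above, the following dense NumPy sketch alternates a tensor-times-matrix (TTM) chain over the other modes with a contraction against the core and a thin QR update of each factor. All names (`ttm`, `unfold`, `hoqri_sketch`), the dense data layout, and the update schedule are our own illustrative assumptions; this is not the paper's sparse TTMcTC implementation, which is precisely what avoids materializing the intermediate tensor `Y` below.

```python
import numpy as np

def ttm(T, M, mode):
    """Mode-`mode` product: multiply tensor T by matrix M along axis `mode`."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def unfold(T, mode):
    """Mode-`mode` matricization of tensor T."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hoqri_sketch(T, ranks, iters=25, seed=0):
    """Hypothetical dense sketch of a QR-based alternating Tucker update."""
    rng = np.random.default_rng(seed)
    N = T.ndim
    # Orthonormal initial factors via thin QR of random Gaussian matrices.
    U = [np.linalg.qr(rng.standard_normal((T.shape[n], r)))[0]
         for n, r in enumerate(ranks)]
    G = T
    for n in range(N):                  # initial core: G = T x_n U_n^T
        G = ttm(G, U[n].T, n)
    for _ in range(iters):
        for n in range(N):
            Y = T
            for m in range(N):          # TTM chain over all other modes
                if m != n:
                    Y = ttm(Y, U[m].T, m)
            # Contract with the core's mode-n unfolding, then orthonormalize
            # with a thin QR in place of HOOI's truncated SVD.
            U[n] = np.linalg.qr(unfold(Y, n) @ unfold(G, n).T)[0]
        G = T
        for n in range(N):              # refresh the core with new factors
            G = ttm(G, U[n].T, n)
    return U, G
```

In this dense form, the matrix handed to `np.linalg.qr` has only `ranks[n]` columns because of the contraction with the core's unfolding, which is the structural reason a QR factorization suffices where HOOI would need a truncated SVD.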