We introduce an efficient approach for optimization over orthogonal groups on highly parallel computation units such as GPUs or TPUs. As in earlier work, we parametrize an orthogonal matrix as a product of Householder reflections. However, to overcome the poor parallelizability of computing Householder reflections sequentially, we propose employing an accumulation scheme called the compact WY (or CWY) transform, a compact, parallelization-friendly matrix representation of a series of Householder reflections. We further develop a novel Truncated CWY (or T-CWY) approach for Stiefel manifold parametrization which has competitive complexity and, again, yields benefits when computed on GPUs and TPUs. We prove that our CWY and T-CWY methods lead to convergence to a stationary point of the training objective when coupled with stochastic gradient descent. We apply our methods to train recurrent neural network architectures on neural machine translation and video prediction tasks.
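As a rough illustration of the CWY idea (not code from the paper; all names below are ours), the following NumPy sketch checks the standard compact WY identity for unit reflection vectors: a product of Householder reflections H_i = I - 2 u_i u_i^T equals I - U S^{-1} U^T, where S is the strictly upper-triangular part of U^T U plus 1/2 on the diagonal. The paper's exact convention (e.g. for unnormalized vectors) may differ, but this conveys how the sequential product collapses into a single matrix expression.

```python
import numpy as np

def householder_product_sequential(U):
    """Multiply Householder reflections H_i = I - 2 u_i u_i^T one by one (unit columns u_i)."""
    d, k = U.shape
    Q = np.eye(d)
    for i in range(k):
        u = U[:, i:i + 1]                      # i-th reflection vector as a column
        Q = Q @ (np.eye(d) - 2.0 * u @ u.T)
    return Q

def householder_product_cwy(U):
    """Same product in compact WY form: Q = I - U S^{-1} U^T,
    with S = strict_upper(U^T U) + 0.5 * I (assumes unit reflection vectors)."""
    d, k = U.shape
    S = np.triu(U.T @ U, k=1) + 0.5 * np.eye(k)
    return np.eye(d) - U @ np.linalg.solve(S, U.T)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, k = 8, 4
    U = rng.standard_normal((d, k))
    U /= np.linalg.norm(U, axis=0)             # normalize each reflection vector
    Q_seq = householder_product_sequential(U)
    Q_cwy = householder_product_cwy(U)
    print(np.allclose(Q_seq, Q_cwy))           # True: the two forms agree
    print(np.allclose(Q_cwy.T @ Q_cwy, np.eye(d)))  # True: the result is orthogonal
```

The practical point of the compact form is that it replaces k sequential matrix multiplications with a few dense matrix products and one small k-by-k triangular solve, operations that map well onto GPU/TPU hardware.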