Forward gradients have recently been introduced to bypass backpropagation in automatic differentiation while retaining unbiased estimators of the true gradients. We derive an optimality condition for obtaining the best approximating forward gradients, which leads to mathematical insights suggesting that optimization in high dimensions is challenging with forward gradients. Our extensive experiments on test functions support this claim.
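For concreteness, here is a minimal JAX sketch of the forward gradient estimator described above: sample a random tangent direction, compute the directional derivative in a single forward-mode pass, and scale the direction by it. The function name `forward_gradient` and the quadratic test function are illustrative assumptions, not the paper's code.

```python
import jax
import jax.numpy as jnp

def forward_gradient(f, theta, key):
    # Sample a random tangent direction v ~ N(0, I).
    v = jax.random.normal(key, theta.shape)
    # Forward-mode AD yields the directional derivative (grad f(theta) . v)
    # in one forward pass, with no backpropagation.
    _, dir_deriv = jax.jvp(f, (theta,), (v,))
    # Since E[v v^T] = I, scaling v by the directional derivative gives
    # an unbiased (but noisy) estimator of the true gradient.
    return dir_deriv * v

# Usage on a simple quadratic test function; the true gradient is 2 * theta.
f = lambda x: jnp.sum(x ** 2)
theta = jnp.ones(5)
g = forward_gradient(f, theta, jax.random.PRNGKey(0))
```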