In this paper, we study the problem of speeding up a class of optimization algorithms known as Frank-Wolfe, a conditional gradient method. We develop and employ two novel inner product search data structures, improving on the prior fastest algorithm of [Shrivastava, Song and Xu, NeurIPS 2021].

* The first data structure applies a low-dimensional random projection to reduce the problem to a lower dimension, then uses an efficient inner product search data structure. It has preprocessing time $\tilde O(nd^{\omega-1}+dn^{1+o(1)})$ and per-iteration cost $\tilde O(d+n^\rho)$ for a small constant $\rho$.
* The second data structure leverages recent developments in adaptive inner product search data structures that can output estimates of all inner products. It has preprocessing time $\tilde O(nd)$ and per-iteration cost $\tilde O(d+n)$.

The first algorithm improves on the state of the art (preprocessing time $\tilde O(d^2n^{1+o(1)})$ and per-iteration cost $\tilde O(dn^\rho)$) in all parameter regimes, while the second offers an even faster preprocessing time and is preferable when the number of iterations is small.
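To make the first ingredient concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm) of the idea behind the first data structure: Frank-Wolfe over the convex hull of $n$ data vectors, where the linear minimization oracle $\arg\min_i \langle \nabla f(w), x_i\rangle$ is answered approximately via a Johnson-Lindenstrauss random projection to a lower dimension $k \ll d$. All names, the objective $f(w) = \tfrac{1}{2}\|w-b\|^2$, and the parameter choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 128, 32          # n points in R^d, sketched down to R^k

X = rng.standard_normal((n, d))  # vertices; feasible set is conv(X)
b = rng.standard_normal(d)       # target for the toy objective below

# Preprocessing: sketch every data vector once with a JL matrix,
# so inner products are approximately preserved in R^k.
S = rng.standard_normal((k, d)) / np.sqrt(k)
X_sketch = X @ S.T               # shape (n, k)

def grad(w):
    # gradient of the toy objective f(w) = 0.5 * ||w - b||^2
    return w - b

def lmo_sketch(g):
    # approximate argmin_i <g, x_i>, computed in the k-dim sketch space:
    # O(dk + nk) per call instead of O(nd)
    return int(np.argmin(X_sketch @ (S @ g)))

# Frank-Wolfe with the approximate linear minimization oracle
w = X[0].copy()
f_init = 0.5 * np.dot(w - b, w - b)
for t in range(200):
    i = lmo_sketch(grad(w))
    gamma = 2.0 / (t + 2)        # standard Frank-Wolfe step size
    w = (1 - gamma) * w + gamma * X[i]

f_final = 0.5 * np.dot(w - b, w - b)
```

The iterate stays in the convex hull by construction, and the per-iteration cost is dominated by the sketched oracle call rather than a full pass over all $n$ vectors in dimension $d$; the paper's data structures push this oracle cost further down (to $\tilde O(d+n^\rho)$ or $\tilde O(d+n)$) with more sophisticated indexing than this plain linear scan.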