We investigate a family of approximate multi-step proximal point methods obtained from implicit linear multistep discretizations of gradient flow. The resulting methods have a per-update computational cost comparable to that of the classical proximal point method. We examine several optimization problems on which an approximate multi-step proximal point method yields improved convergence behavior, and we argue that this improvement stems from the reduced truncation error in approximating gradient flow.
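As a concrete illustration (a sketch using backward Euler and BDF2 as representative implicit schemes, not necessarily the discretizations developed in the paper): backward Euler applied to the gradient flow $\dot{x} = -\nabla f(x)$ with step size $h$ recovers the classical proximal point update, while a two-step implicit scheme such as BDF2 yields a multi-step proximal update requiring the same single proximal evaluation per iteration,
\[
\dot{x} = -\nabla f(x)
\;\xrightarrow{\text{backward Euler}}\;
x_{k+1} = \operatorname{prox}_{h f}(x_k)
= \arg\min_{x} \Big\{ f(x) + \tfrac{1}{2h}\,\|x - x_k\|^2 \Big\},
\]
\[
\text{BDF2:}\quad
\frac{3x_{k+1} - 4x_k + x_{k-1}}{2h} = -\nabla f(x_{k+1})
\;\Longleftrightarrow\;
x_{k+1} = \operatorname{prox}_{\frac{2h}{3} f}\!\Big(\tfrac{4x_k - x_{k-1}}{3}\Big).
\]
The single proximal evaluation per step is what keeps the per-update cost on par with the proximal point method, while the higher-order discretization lowers the local truncation error in tracking the flow.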