We propose a hierarchical version of dual averaging for zeroth-order online non-convex optimization - i.e., learning processes where, at each stage, the optimizer is facing an unknown non-convex loss function and only receives the incurred loss as feedback. The proposed class of policies relies on the construction of an online model that aggregates loss information as it arrives, and it consists of two principal components: (a) a regularizer adapted to the Fisher information metric (as opposed to the metric norm of the ambient space); and (b) a principled exploration of the problem's state space based on an adapted hierarchical schedule. This construction enables sharper control of the model's bias and variance, and allows us to derive tight bounds for both the learner's static and dynamic regret - i.e., the regret incurred against the best dynamic policy in hindsight over the horizon of play.
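For intuition only, the sketch below shows a generic zeroth-order dual-averaging loop with one-point loss feedback, in the spirit of the policies described above. It is not the paper's method: a plain Euclidean regularizer and a flat, fixed-radius perturbation scheme stand in for the Fisher-metric regularizer and the hierarchical exploration schedule, and `loss_oracle`, `eta`, and `delta` are illustrative placeholders.

```python
import numpy as np

def dual_averaging_bandit(loss_oracle, d, T, eta=0.1, delta=0.05, rng=None):
    """Illustrative zeroth-order dual averaging on the Euclidean unit ball.

    Hypothetical sketch: aggregates observed losses into a dual model S_t and
    plays a regularized leader point, exploring via random sphere perturbations.
    This replaces the Fisher-metric regularizer and hierarchical schedule of the
    paper with simpler stand-ins.
    """
    rng = np.random.default_rng() if rng is None else rng
    S = np.zeros(d)                          # aggregated loss/gradient information
    losses = []
    for t in range(1, T + 1):
        x = -eta * S / np.sqrt(t)            # dual-averaging (lazy) leader point
        x = x / max(1.0, np.linalg.norm(x))  # project onto the unit ball
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)               # random exploration direction
        x_play = x + delta * u               # perturbed query point
        loss = loss_oracle(t, x_play)        # only the incurred loss is observed
        losses.append(loss)
        S += (d / delta) * loss * u          # one-point estimate of the gradient
    return np.array(losses)
```

As a usage example, `dual_averaging_bandit(lambda t, x: np.sum(x**2), d=5, T=1000)` runs the loop against a fixed quadratic loss; in the online non-convex setting of the paper, `loss_oracle` would instead return the stage-t loss of an unknown, possibly changing non-convex function.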