In many online learning or multi-armed bandit problems, the actions taken or arms pulled are ordinal and required to be monotone over time. Examples include dynamic pricing, in which firms use markup pricing policies to please early adopters and deter strategic waiting, and clinical trials, in which the dose allocation usually follows the dose escalation principle to prevent dose-limiting toxicities. We consider the continuum-armed bandit problem when the arm sequence is required to be monotone. We show that when the unknown objective function is Lipschitz continuous, the regret is $\Theta(T)$. When in addition the objective function is unimodal or quasiconcave, the regret is $\tilde O(T^{3/4})$ under the proposed algorithm, which is also shown to be the optimal rate. This deviates from the optimal rate $\tilde O(T^{2/3})$ in the continuum-armed bandit literature and demonstrates the cost to learning efficiency incurred by the monotonicity requirement.
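For concreteness, the performance measure can be written out as follows. This is a minimal sketch of the standard continuum-armed formulation; the symbols $f$ (the unknown objective), $x_t \in [0,1]$ (the arm pulled at round $t$), and $x^\ast$ (a maximizer) are illustrative notation assumed here for exposition, not taken from the paper's body:
$$
R(T) \;=\; T\, f(x^\ast) \;-\; \mathbb{E}\!\left[\sum_{t=1}^{T} f(x_t)\right],
\qquad x^\ast \in \arg\max_{x \in [0,1]} f(x),
$$
subject to the monotone-arm constraint $x_1 \le x_2 \le \cdots \le x_T$ (nondecreasing arms, as in markup pricing and dose escalation). The constraint is what separates this setting from the unconstrained continuum-armed bandit: once the arm sequence passes the maximizer $x^\ast$, it can never return, which is the source of the gap between $\tilde O(T^{3/4})$ and $\tilde O(T^{2/3})$.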