Controller tuning is a vital step in ensuring that a controller delivers its designed performance. DiffTune has been proposed as an automatic tuning method that unrolls the dynamical system and controller into a computational graph and uses auto-differentiation to obtain the gradient for the controller's parameter update. However, DiffTune uses vanilla gradient descent to iteratively update the parameters, whose performance largely depends on the choice of the learning rate (as a hyperparameter). In this paper, we propose hyperparameter-free methods to update the controller parameters. We find the optimal parameter update by maximizing the loss reduction, where a predicted loss based on the approximated state and control is used for the maximization. Two methods are proposed to optimally update the parameters and are compared with related variants in simulations on a Dubins car and a quadrotor. Simulation experiments show that the proposed first-order method outperforms the hyperparameter-based methods and is more robust than the second-order hyperparameter-free methods.