Distributionally Robust Optimal Control (DROC) is a framework that enables robust control in a stochastic setting where the true disturbance distribution is unknown. Traditional DROC approaches require the ambiguity set and a Kullback-Leibler (KL) divergence bound to be specified in advance in order to represent the distributional uncertainty; however, these quantities are often unavailable a priori or require manual specification. To overcome this limitation, we propose a data-driven approach that jointly estimates the uncertainty distribution and the corresponding KL divergence bound, which we refer to as $\mathrm{D}^3\mathrm{ROC}$. To evaluate the effectiveness of our approach, we consider a car-like robot navigation task with unknown noise distributions. The experimental results show that $\mathrm{D}^3\mathrm{ROC}$ yields robust and effective control policies, outperforming iterative Linear Quadratic Gaussian (iLQG) control and demonstrating strong adaptability to varying noise distributions.
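To make the data-driven ingredient concrete, below is a minimal sketch of one generic way to estimate a nominal disturbance distribution and a KL divergence radius from samples: fit a Gaussian to observed disturbances and take a high quantile of bootstrapped KL divergences to the nominal fit as the bound. The Gaussian model, the bootstrap recipe, and the function names (`gaussian_kl`, `estimate_nominal_and_kl_bound`) are illustrative assumptions, not the estimator used in the paper.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL divergence KL(N(mu0, cov0) || N(mu1, cov1)) between multivariate Gaussians."""
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (
        np.trace(cov1_inv @ cov0)
        + diff @ cov1_inv @ diff
        - k
        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))
    )

def estimate_nominal_and_kl_bound(samples, n_boot=200, quantile=0.95, seed=None):
    """Hypothetical recipe: fit a nominal Gaussian to disturbance samples,
    then estimate a KL radius as a bootstrap quantile of the KL divergence
    between resampled fits and the nominal fit."""
    rng = np.random.default_rng(seed)
    mu_hat = samples.mean(axis=0)
    cov_hat = np.cov(samples, rowvar=False)
    kls = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(samples), size=len(samples))
        boot = samples[idx]
        kls.append(gaussian_kl(boot.mean(axis=0), np.cov(boot, rowvar=False),
                               mu_hat, cov_hat))
    return mu_hat, cov_hat, float(np.quantile(kls, quantile))

# Usage example with synthetic 2-D disturbance samples
data = np.random.default_rng(0).normal(loc=[0.0, 0.1], scale=[0.05, 0.08], size=(500, 2))
mu, cov, kl_bound = estimate_nominal_and_kl_bound(data, seed=1)
print("nominal mean:", mu, "KL bound:", kl_bound)
```

The estimated nominal distribution and KL radius would then parameterize the ambiguity set handed to the downstream distributionally robust controller.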