This position paper reflects on the state of the art in decision-making under uncertainty. A classical assumption is that probabilities can sufficiently capture all uncertainty in a system. In this paper, the focus is on uncertainty that goes beyond this classical interpretation, in particular by drawing a clear distinction between aleatoric and epistemic uncertainty. The paper features an overview of Markov decision processes (MDPs) and extensions that account for partial observability and adversarial behavior. These models sufficiently capture aleatoric uncertainty, but fail to account for epistemic uncertainty in a robust manner. Consequently, we present a thorough overview of so-called uncertainty models that capture epistemic uncertainty under a more robust interpretation. We show several solution techniques for both discrete and continuous models, ranging from formal verification and control-based abstractions to reinforcement learning. As an integral part of this paper, we list and discuss several key challenges that arise when dealing with rich types of uncertainty in a model-based fashion.