Offline estimation of the dynamical model of a Markov Decision Process (MDP) is a non-trivial task that greatly depends on the data available during the learning phase. Sometimes the dynamics of the model are invariant with respect to some transformations of the current state and action. Recent works showed that an expert-guided pipeline relying on Density Estimation methods, such as Deep Neural Network-based Normalizing Flows, effectively detects this structure in deterministic environments, both categorical and continuous-valued. The acquired knowledge can be exploited to augment the original data set, eventually leading to a reduction in the distributional shift between the true and the learnt model. In this work we extend the paradigm to also tackle non-deterministic MDPs; in particular, 1) we propose a detection threshold in categorical environments based on statistical distances, 2) we introduce a benchmark of the distributional shift in continuous environments based on the Wilcoxon signed-rank statistical test, and 3) we show that these contributions lead to a performance improvement when solving the learnt MDP and then applying the optimal policy in the real environment.
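To make the first contribution concrete: the abstract names a detection threshold based on statistical distances for categorical environments, without fixing a particular distance here. The following is a minimal sketch of that idea, assuming total variation distance between empirical next-state distributions and hypothetical helper names (empirical_model, detect_symmetry, state_map, action_map); the actual distance, threshold value, and interfaces used in the paper may differ.

    import numpy as np
    from collections import defaultdict

    def empirical_model(transitions, n_states):
        """Empirical next-state distributions P(. | s, a) from (s, a, s') samples."""
        counts = defaultdict(lambda: np.zeros(n_states))
        for s, a, s_next in transitions:
            counts[(s, a)][s_next] += 1.0
        return {sa: c / c.sum() for sa, c in counts.items()}

    def total_variation(p, q):
        """Total variation distance between two categorical distributions."""
        return 0.5 * np.abs(p - q).sum()

    def detect_symmetry(transitions, state_map, action_map, n_states, threshold=0.1):
        """Flag the candidate transformation (state_map, action_map) as a symmetry
        of the dynamics when the average TV distance between P(. | s, a) and the
        relabelled P(. | state_map(s), action_map(a)) stays below the threshold.
        Distance choice and threshold are illustrative assumptions."""
        model = empirical_model(transitions, n_states)
        distances = []
        for (s, a), p in model.items():
            key = (state_map(s), action_map(a))
            if key not in model:
                continue  # transformed state-action pair unseen in the data set
            # re-index the transformed distribution back onto the original labels
            q = np.array([model[key][state_map(sn)] for sn in range(n_states)])
            distances.append(total_variation(p, q))
        return bool(distances) and float(np.mean(distances)) < threshold

A transformation accepted by such a test can then be applied to every recorded transition to generate augmented samples, which is the data-augmentation step the abstract refers to.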
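For the second contribution, the abstract names the Wilcoxon signed-rank test as the benchmark of distributional shift in continuous environments. A minimal sketch of a paired comparison follows, assuming the paired quantities are per-transition prediction errors of the learnt model with and without symmetry-based augmentation (an assumption not fixed by the abstract); the hypothetical helper shift_benchmark and the stand-in data are illustrative only.

    import numpy as np
    from scipy.stats import wilcoxon

    def shift_benchmark(errors_plain, errors_augmented, alpha=0.05):
        """Paired Wilcoxon signed-rank test on per-transition model errors,
        testing whether errors without augmentation are stochastically larger."""
        stat, p_value = wilcoxon(errors_plain, errors_augmented, alternative="greater")
        return {"statistic": stat, "p_value": p_value,
                "augmentation_reduces_shift": p_value < alpha}

    # hypothetical usage on synthetic stand-in error values
    rng = np.random.default_rng(0)
    plain = rng.normal(0.5, 0.1, size=100)              # errors without augmentation
    augmented = plain - np.abs(rng.normal(0.1, 0.05, 100))  # errors with augmentation
    print(shift_benchmark(plain, augmented))

The signed-rank test is a natural fit here because the two error samples are paired on the same held-out transitions and no normality assumption is needed.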