Offline estimation of the dynamical model of a Markov Decision Process (MDP) is a non-trivial task that strongly depends on the data available during the learning phase. Sometimes the dynamics of the model are invariant with respect to some transformations of the current state and action. Recent works showed that an expert-guided pipeline relying on density estimation methods, such as deep neural network based Normalizing Flows, effectively detects this structure in deterministic environments, both categorical and continuous-valued. The acquired knowledge can be exploited to augment the original data set, ultimately reducing the distributional shift between the true and the learnt model. Such a data augmentation technique can be applied as a preliminary step before adopting an Offline Reinforcement Learning architecture, improving its performance. In this work we extend the paradigm to also tackle non-deterministic MDPs; in particular, 1) we propose a detection threshold for categorical environments based on statistical distances, and 2) we show that these results lead to a performance improvement when solving the learnt MDP and then applying the optimal policy in the real environment.
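As an illustration of the thresholded detection idea mentioned above, the sketch below compares the empirical next-state distributions of an original and a transformed state-action pair in a categorical environment and declares the candidate transformation a symmetry when their statistical distance falls under a threshold. The abstract does not specify which distance or threshold is used; total variation and the function names here (`symmetry_detected`, `threshold`) are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def total_variation(p: np.ndarray, q: np.ndarray) -> float:
    """Total variation distance between two categorical distributions."""
    return 0.5 * np.abs(p - q).sum()

def symmetry_detected(counts_orig: np.ndarray,
                      counts_transformed: np.ndarray,
                      threshold: float = 0.1) -> bool:
    """Accept the candidate transformation as a symmetry of the dynamics
    if the empirical next-state distributions of the original and the
    transformed (state, action) pair are closer than `threshold`.
    (Illustrative choice of distance and threshold, not the paper's.)"""
    p = counts_orig / counts_orig.sum()
    q = counts_transformed / counts_transformed.sum()
    return total_variation(p, q) < threshold

# Toy usage: next-state visit counts for (s, a) and (T(s), T(a)).
counts_sa = np.array([40, 55, 5])    # empirical counts for P(s' | s, a)
counts_Tsa = np.array([38, 57, 5])   # empirical counts for P(s' | T(s), T(a))
if symmetry_detected(counts_sa, counts_Tsa):
    # The transformed transitions can then be added to the offline
    # data set as augmented samples before model learning.
    pass
```

A detection of this kind would precede the data augmentation step: only transitions generated by transformations that pass the test are added to the offline data set.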