This paper proposes a new sequential model learning architecture to solve partially observable Markov decision processes. Rather than compressing sequential information at every timestep as in conventional recurrent neural network-based methods, the proposed architecture generates a latent variable for each data block spanning multiple timesteps and passes only the most relevant information to the next block for policy optimization. The proposed blockwise sequential model is implemented with self-attention, enabling detailed sequential learning in partially observable settings. In addition, the model employs an auxiliary learning network that performs gradient estimation efficiently via self-normalized importance sampling, removing the need for complex blockwise reconstruction of the input data during model learning. Numerical results show that the proposed method significantly outperforms previous methods across various partially observable environments.
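To make the blockwise idea concrete, the following is a minimal sketch (not the authors' implementation) of a model that summarizes each block of multiple timesteps into one latent variable with self-attention and carries that latent into the next block. All names and hyperparameters (`BlockwiseSelfAttentionModel`, `d_model`, `block_len`, the reparameterized Gaussian latent) are illustrative assumptions.

```python
# Hypothetical sketch of blockwise latent generation via self-attention.
import torch
import torch.nn as nn


class BlockwiseSelfAttentionModel(nn.Module):
    def __init__(self, obs_dim: int, d_model: int = 64, latent_dim: int = 32,
                 block_len: int = 8, n_heads: int = 4):
        super().__init__()
        self.block_len = block_len
        self.latent_dim = latent_dim
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Maps the attended block summary to a latent distribution (mean, log-std).
        self.to_latent = nn.Linear(d_model, 2 * latent_dim)
        # Injects the previous block's latent as an extra "summary token".
        self.latent_to_token = nn.Linear(latent_dim, d_model)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        """obs_seq: (batch, T, obs_dim), with T a multiple of block_len."""
        bsz, T, _ = obs_seq.shape
        latent = torch.zeros(bsz, self.latent_dim, device=obs_seq.device)
        latents = []
        for start in range(0, T, self.block_len):
            block = self.embed(obs_seq[:, start:start + self.block_len])
            # Prepend the carried-over latent as a token so self-attention can
            # mix it with the current block's observations.
            token = self.latent_to_token(latent).unsqueeze(1)
            h = self.encoder(torch.cat([token, block], dim=1))
            mean, log_std = self.to_latent(h[:, 0]).chunk(2, dim=-1)
            latent = mean + log_std.exp() * torch.randn_like(mean)  # reparameterized sample
            latents.append(latent)
        # One latent per block, to be consumed by the policy.
        return torch.stack(latents, dim=1)  # (batch, T / block_len, latent_dim)
```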
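The gradient estimator mentioned above relies on self-normalized importance sampling (SNIS). The sketch below illustrates the general SNIS technique only, not the paper's specific estimator: samples drawn from a proposal distribution are reweighted by normalized importance ratios, so expectations under the target can be estimated without reconstructing the sampled inputs. The function name and example distributions are assumptions for illustration.

```python
# Generic self-normalized importance sampling (SNIS) estimate of E_p[f(x)]
# from samples x_i ~ q, using normalized weights w_i ∝ p(x_i) / q(x_i).
import torch


def snis_estimate(log_p: torch.Tensor, log_q: torch.Tensor,
                  values: torch.Tensor) -> torch.Tensor:
    """log_p, log_q, values: tensors of shape (num_samples,); values = f(x_i)."""
    log_w = log_p - log_q            # unnormalized log importance ratios
    w = torch.softmax(log_w, dim=0)  # self-normalization: weights sum to 1
    return (w * values).sum()


# Usage: estimate E_p[x^2] with target p = N(0, 1) and proposal q = N(1, 2).
p = torch.distributions.Normal(0.0, 1.0)
q = torch.distributions.Normal(1.0, 2.0)
x = q.sample((4096,))
print(snis_estimate(p.log_prob(x), q.log_prob(x), x ** 2))  # ≈ 1.0
```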