Active inference is a theory of perception, learning, and decision making that can be applied to neuroscience, robotics, and machine learning. Recently, research has sought to scale up this framework using Monte Carlo tree search and deep learning. The goal of this work is to solve more complicated tasks using deep active inference. First, we review the existing literature; then, we progressively build a deep active inference agent. For two agents, we experimented with five definitions of the expected free energy and three different action-selection strategies. According to our experiments, the only models able to solve the dSprites environment are those that maximise reward. Finally, we compare the similarity of the representations learned by the layers of various agents using centered kernel alignment (CKA). Importantly, the agent maximising reward and the agent minimising expected free energy learn very similar representations, except for the last layer of the critic network (reflecting the difference in learning objective) and the variance layers of the transition and encoder networks. We found that the reward-maximising agent is much more certain than the agent minimising expected free energy. This is because the agent minimising expected free energy always picks the action down, and therefore does not gather enough data for the other actions. In contrast, the agent maximising reward keeps selecting the actions left and right, enabling it to solve the task successfully. The only difference between these two agents is the epistemic value, which aims to make the outputs of the transition and encoder networks as close as possible. Thus, the agent minimising expected free energy picks a single action (down) and becomes an expert at predicting the future when selecting this action, which makes the KL divergence between the outputs of the transition and encoder networks small.
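The representation comparison above relies on centered kernel alignment. As a minimal sketch (assuming each layer's representation is collected as a samples-by-features matrix; the function name and shapes are illustrative, not taken from the paper), linear CKA can be computed as follows:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation
    matrices X (n_samples, d1) and Y (n_samples, d2).
    Returns a similarity in [0, 1]; 1 means the representations
    are identical up to rotation and isotropic scaling."""
    # Center each feature dimension across samples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)
```

In a layer-wise comparison such as the one described above, `X` and `Y` would hold the activations of corresponding layers of two agents on the same batch of inputs; a low score on the critic's last layer or on the variance layers would then reflect the divergence in learning objective and certainty.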