We present a differentiable soft-body physics simulator that can be composed with neural networks as a differentiable layer. In contrast to other differentiable physics approaches that use explicit forward models to define state transitions, we focus on implicit state transitions defined via function minimization. Implicit state transitions appear in implicit numerical integration methods, which offer the benefits of large time steps and excellent numerical stability, but require special treatment to achieve differentiability due to the absence of an explicit differentiable forward pass. In contrast to other implicit differentiation approaches that require explicit formulas for the force function and the force Jacobian matrix, we present an energy-based approach that allows us to compute these derivatives automatically and in a matrix-free fashion via reverse-mode automatic differentiation. This allows for more flexibility and productivity when defining physical models and is particularly important in the context of neural network training, which often relies on reverse-mode automatic differentiation (backpropagation). We demonstrate the effectiveness of our differentiable simulator in policy optimization for locomotion tasks and show that it achieves better sample efficiency than model-free reinforcement learning.
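To make the idea concrete, the following is a minimal sketch (not the paper's actual implementation) of differentiating through an energy-minimizing implicit Euler step: the forward pass minimizes an incremental potential, and the backward pass applies the implicit function theorem matrix-free, obtaining Hessian-vector products purely from reverse-mode AD. The toy quadratic energy, all names, and the constants are illustrative assumptions.

```python
import jax
import jax.numpy as jnp
from jax.scipy.sparse.linalg import cg

h, m = 0.1, 1.0  # time step and mass (illustrative values)
a = m / h**2     # inertia scale

def energy(x, x_prev, v_prev, k):
    """Incremental potential: inertia term plus a toy spring potential."""
    inertia = 0.5 * a * jnp.sum((x - x_prev - h * v_prev) ** 2)
    spring = 0.5 * k * jnp.sum(x ** 2)
    return inertia + spring

grad_x = jax.grad(energy, argnums=0)  # force residual: zero at the minimizer

@jax.custom_vjp
def step(x_prev, v_prev, k):
    # Forward pass: minimize the energy. The toy energy is quadratic, so its
    # minimizer is closed-form; a real simulator would run Newton's method.
    return a * (x_prev + h * v_prev) / (a + k)

def step_fwd(x_prev, v_prev, k):
    x = step(x_prev, v_prev, k)
    return x, (x, x_prev, v_prev, k)

def step_bwd(res, w):
    x, x_prev, v_prev, k = res
    # Implicit function theorem: with F = grad_x(energy) = 0 at x, a cotangent
    # w pulls back as -(H^{-1} w)^T dF/dtheta, where H is the energy Hessian.
    # Hessian-vector products come from nested reverse-mode passes, so H is
    # never materialized (matrix-free).
    hvp = lambda v: jax.grad(
        lambda y: jnp.vdot(grad_x(y, x_prev, v_prev, k), v))(x)
    lam, _ = cg(hvp, w)  # solve H lam = w with conjugate gradients
    # Mixed partials dF/dtheta contracted with lam, again via reverse mode.
    _, pullback = jax.vjp(
        lambda xp, vp, kk: grad_x(x, xp, vp, kk), x_prev, v_prev, k)
    gx, gv, gk = pullback(lam)
    return -gx, -gv, -gk

step.defvjp(step_fwd, step_bwd)
```

With this custom VJP in place, `jax.grad` flows through the implicit step like any other layer; for the toy energy above, the gradient with respect to the stiffness `k` matches the analytic derivative of the closed-form minimizer.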