Unpaired image translation algorithms can be used for sim2real tasks, but many fail to generate temporally consistent results. We present a new approach that combines differentiable rendering with image translation to achieve temporal consistency over indefinite timescales, using surface consistency losses and \emph{neural neural textures}. We call this algorithm TRITON (Texture Recovering Image Translation Network): an unsupervised, end-to-end, stateless sim2real algorithm that leverages the underlying 3D geometry of input scenes by generating realistic-looking learnable neural textures. By settling on a particular texture for the objects in a scene, we ensure consistency between frames statelessly. Unlike previous algorithms, TRITON is not limited to camera movements -- it can handle the movement of objects as well, making it useful for downstream tasks such as robotic manipulation.
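The core idea of a learnable neural texture can be illustrated with a minimal sketch: a parameter grid attached to an object's surface, sampled at per-pixel UV coordinates so that the same surface point always yields the same feature, regardless of camera or object motion. This is a hypothetical illustration, not TRITON's actual implementation; the class name, channel count, and resolution are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTexture(nn.Module):
    """Hypothetical minimal learnable texture: a (C, H, W) parameter grid
    sampled at per-pixel UV coordinates rendered from the scene geometry."""

    def __init__(self, channels: int = 8, resolution: int = 256):
        super().__init__()
        # Learnable texture atlas, optimized jointly with the translator.
        self.texture = nn.Parameter(
            torch.randn(1, channels, resolution, resolution) * 0.01
        )

    def forward(self, uv: torch.Tensor) -> torch.Tensor:
        # uv: (B, H, W, 2) in [0, 1]; grid_sample expects coords in [-1, 1].
        grid = uv * 2.0 - 1.0
        tex = self.texture.expand(uv.shape[0], -1, -1, -1)
        # Bilinear lookup: identical UVs map to identical features,
        # which is what gives frame-to-frame consistency statelessly.
        return F.grid_sample(tex, grid, align_corners=True)

if __name__ == "__main__":
    uv = torch.rand(2, 64, 64, 2)     # per-pixel UVs from the renderer
    out = NeuralTexture()(uv)         # (2, 8, 64, 64) feature image
    print(out.shape)
```

Because the lookup depends only on surface coordinates (not on time or previous frames), consistency across frames follows from the texture itself rather than from any recurrent state.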