Iterative refinement -- start with a random guess, then iteratively improve the guess -- is a useful paradigm for representation learning because it offers a way to break symmetries among equally plausible explanations for the data. This property enables such methods to be applied to infer representations of sets of entities, such as objects in physical scenes, with a structure that resembles clustering algorithms in latent space. However, most prior works differentiate through the unrolled refinement process, which can make optimization challenging. We observe that such methods can instead be made differentiable by means of the implicit function theorem, and we develop an implicit differentiation approach that improves the stability and tractability of training by decoupling the forward and backward passes. This connection enables us to apply advances in optimizing implicit layers not only to improve the optimization of the slot attention module in SLATE, a state-of-the-art method for learning entity representations, but to do so with constant space and time complexity in backpropagation and only one additional line of code.
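To make the mechanism concrete, the sketch below (ours, not the released code) shows one common way to realize first-order implicit differentiation of a fixed-point refinement loop in PyTorch; the names step, z0, and x are placeholders for a single refinement update (e.g., one slot attention iteration), the initial guess, and the conditioning inputs. At a fixed point z* = f(z*, x), the implicit function theorem gives dz*/dtheta = (I - d_z f)^{-1} d_theta f; truncating the Neumann series of the inverse at its first term leaves only the gradient of a single refinement step applied at the fixed point, which is the "one additional line" referred to above.

```python
import torch

def refine_to_fixed_point(step, z0, x, n_iters=7):
    """Iterative refinement z_{t+1} = step(z_t, x), run without
    building an autograd graph through the unrolled iterations."""
    z = z0
    with torch.no_grad():
        for _ in range(n_iters):
            z = step(z, x)
    # The one additional line: re-apply the update once with the
    # (approximate) fixed point detached. Backpropagation then flows
    # through only this single step -- a first-order approximation of
    # the implicit (IFT) gradient.
    return step(z.detach(), x)
```

Under this scheme the forward pass may run any number of refinement iterations, while the backward pass always costs exactly one step, which is what yields the constant space and time complexity in backpropagation claimed above.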