When manipulating three-dimensional data, it is possible to ensure that rotational and translational symmetries are respected by applying so-called SE(3)-equivariant models. Protein structure prediction is a prominent example of a task which displays these symmetries. Recent work in this area has successfully made use of an SE(3)-equivariant model, applying an iterative SE(3)-equivariant attention mechanism. Motivated by this application, we implement an iterative version of the SE(3)-Transformer, an SE(3)-equivariant attention-based model for graph data. We address the additional complications which arise when applying the SE(3)-Transformer in an iterative fashion, compare the iterative and single-pass versions on a toy problem, and consider why an iterative model may be beneficial in some problem settings. We make the code for our implementation available to the community.
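To make the notion of iterating an equivariant layer concrete, here is a minimal sketch, not the SE(3)-Transformer itself: it uses a simple coordinate-update layer (EGNN-style) as a stand-in for SE(3)-equivariant attention, and checks that applying the layer several times still commutes with a rotation and translation of the input. The names `ToyEquivariantLayer` and `iterate` are hypothetical, introduced only for illustration.

```python
# Minimal sketch (not the authors' implementation): iterating an SE(3)-equivariant
# layer preserves equivariance at every step. The layer is a toy stand-in for
# SE(3)-equivariant attention; all class/function names here are hypothetical.
import torch
import torch.nn as nn


class ToyEquivariantLayer(nn.Module):
    """Updates coordinates with messages along relative displacement vectors.

    Because the update is a weighted sum of (x_i - x_j) terms with weights that
    depend only on invariant quantities, rotating and translating the input
    rotates and translates the output identically.
    """

    def __init__(self, feat_dim: int):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim + 1, 32), nn.SiLU(), nn.Linear(32, 1)
        )

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # x: (N, 3) coordinates, h: (N, F) rotation-invariant node features
        diff = x[:, None, :] - x[None, :, :]           # (N, N, 3) displacements
        dist2 = (diff ** 2).sum(-1, keepdim=True)      # (N, N, 1) invariant
        pair = torch.cat(
            [h[:, None, :].expand(-1, h.shape[0], -1),
             h[None, :, :].expand(h.shape[0], -1, -1),
             dist2], dim=-1)
        w = self.edge_mlp(pair)                        # (N, N, 1) edge weights
        return x + (w * diff).mean(dim=1)              # equivariant update


def iterate(layer: nn.Module, x: torch.Tensor, h: torch.Tensor, steps: int):
    # Re-applying the same equivariant layer keeps the overall map equivariant.
    for _ in range(steps):
        x = layer(x, h)
    return x


if __name__ == "__main__":
    torch.manual_seed(0)
    n, f = 8, 4
    x, h = torch.randn(n, 3), torch.randn(n, f)
    layer = ToyEquivariantLayer(f)

    # Random proper rotation (QR with determinant fix) and translation.
    R, _ = torch.linalg.qr(torch.randn(3, 3))
    if torch.det(R) < 0:
        R[:, 0] = -R[:, 0]
    t = torch.randn(3)

    out_then_transform = iterate(layer, x, h, steps=3) @ R.T + t
    transform_then_out = iterate(layer, x @ R.T + t, h, steps=3)
    print(torch.allclose(out_then_transform, transform_then_out, atol=1e-4))
```

Running the check prints `True`: transforming the input and then iterating the layer gives the same result as iterating first and transforming the output, which is the equivariance property the abstract refers to.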