Although equivariant machine learning has proven effective at many tasks, success depends heavily on the assumption that the ground truth function is symmetric over the entire domain, matching the symmetry built into the equivariant neural network. A missing piece in the equivariant learning literature is the analysis of equivariant networks when symmetry exists only partially in the domain. In this work, we present a general theory for such a situation. We propose pointwise definitions of correct, incorrect, and extrinsic equivariance, which allow us to continuously quantify the degree of each type of equivariance a function displays. We then study the impact of various degrees of incorrect or extrinsic symmetry on model error. We prove error lower bounds for invariant or equivariant networks in classification and regression settings with partially incorrect symmetry. We also analyze the potentially harmful effects of extrinsic equivariance. Experiments validate these results in three different environments.
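For intuition, here is a minimal sketch of how such pointwise labels might be assigned in the simplest (invariant) case, where the output action is trivial. This is an illustrative toy construction, not the paper's formal definitions: the ground truth `f`, the reflection action `g`, and the support indicator `in_support` are all hypothetical stand-ins chosen so that each of the three types appears at some point.

```python
def g(x):
    """Hypothetical group action: reflection of the real line, x -> -x."""
    return -x

def in_support(x):
    """Hypothetical support of the data distribution."""
    return -2.0 <= x <= 3.0

def f(x):
    """Hypothetical ground-truth label (invariant task):
    symmetric near the origin, asymmetric farther out."""
    if abs(x) <= 1.0:
        return abs(x)
    return x if x > 0 else 0.0

def equivariance_type(x):
    """Classify the symmetry of f at the point x with respect to g."""
    assert in_support(x), "x should be drawn from the data distribution"
    if not in_support(g(x)):
        return "extrinsic"   # the orbit of x leaves the data distribution
    if f(g(x)) == f(x):
        return "correct"     # f respects the symmetry at x
    return "incorrect"       # f breaks the symmetry at x

for x in (0.5, 1.5, 2.5):
    print(x, equivariance_type(x))
# 0.5 correct   (f(-0.5) = f(0.5) = 0.5)
# 1.5 incorrect (f(-1.5) = 0.0, but f(1.5) = 1.5)
# 2.5 extrinsic (g(2.5) = -2.5 lies outside the support)
```

Because the classification is made point by point rather than globally, the fraction of points of each type gives a continuous measure of how correct, incorrect, or extrinsic the symmetry is over the domain, which is the quantity the abstract refers to.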