Building models that comply with the invariances inherent to different domains, such as invariance under translation or rotation, is a key aspect of applying machine learning to real-world problems like molecular property prediction, medical imaging, protein folding, or LiDAR classification. For the first time, we study how the invariances of a model can be leveraged to provably guarantee the robustness of its predictions. We propose a gray-box approach, enhancing the powerful black-box randomized smoothing technique with white-box knowledge about invariances. First, we develop gray-box certificates based on group orbits, which can be applied to arbitrary models with invariance under permutations and Euclidean isometries. Then, we derive provably tight gray-box certificates. We experimentally demonstrate that the provably tight certificates can offer much stronger guarantees, but that in practical scenarios the orbit-based method is a good approximation.
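As background for the black-box technique the abstract builds on, here is a minimal sketch of a standard randomized-smoothing certificate in the style of Cohen et al., not the gray-box certificates proposed in this work. The function name `smoothed_certify` and its parameters are illustrative, and a real certificate would lower-bound the top-class probability with a confidence interval rather than use the raw vote share:

```python
import random
from statistics import NormalDist

def smoothed_certify(base_classifier, x, sigma=0.25, n=1000, seed=0):
    """Black-box randomized smoothing sketch (Cohen et al. style).

    Classifies noisy copies of `x`, takes a majority vote, and returns the
    predicted label together with a certified L2 radius
    sigma * Phi^{-1}(p_A), where p_A is the top class's vote share.
    """
    rng = random.Random(seed)
    votes = {}
    for _ in range(n):
        # Perturb the input with isotropic Gaussian noise and vote.
        noisy = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        label = base_classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    top_label, top_count = max(votes.items(), key=lambda kv: kv[1])
    p_a = min(top_count / n, 1.0 - 1e-9)  # keep inv_cdf in its open domain
    if p_a <= 0.5:
        return top_label, 0.0  # majority class too weak: abstain, no certificate
    return top_label, sigma * NormalDist().inv_cdf(p_a)
```

The gray-box certificates described in the abstract strengthen this kind of guarantee by additionally exploiting known model invariances, for example extending the certified region across an entire group orbit of a perturbed input.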