Previous works have established solid foundations for neural set functions, as well as effective architectures that preserve the properties necessary for operating on sets, such as invariance to permutations of the set elements. Subsequently, Mini-Batch Consistency (MBC), the ability to sequentially process any permutation of any random set partition scheme while maintaining consistency guarantees on the output, was established, but with limited options for network architectures. We further study the MBC property in neural set encoding functions, establishing a method for converting arbitrary non-MBC models to satisfy MBC. In doing so, we provide a framework for a universally-MBC (UMBC) class of set functions. Additionally, we explore an interesting dropout strategy made possible by our framework, and investigate its effect on probabilistic calibration under test-time distributional shifts. We validate UMBC with proofs backed by unit tests, and provide qualitative and quantitative experiments on toy data, clean and corrupted point cloud classification, and amortized clustering on ImageNet. The results demonstrate the utility of UMBC, and we further discover that our dropout strategy improves uncertainty calibration.
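To make the MBC property concrete, the following minimal sketch illustrates it with mean pooling, a classic aggregator known to be MBC. This is only an illustration of the property under an assumed linear per-element feature map, not the UMBC construction from the paper; the names `encode_full`, `encode_streaming`, and `W` are hypothetical.

```python
import numpy as np

# Illustrative sketch of Mini-Batch Consistency (MBC): an encoder is MBC if
# sequentially processing the chunks of any permutation of any partition of a
# set, while carrying only a small running state, yields the same output as
# encoding the full set at once. Mean pooling satisfies this.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))  # assumed per-element feature map: phi(x) = x @ W

def encode_full(X):
    """Encode the whole set at once: mean over per-element features."""
    return (X @ W).mean(axis=0)

def encode_streaming(chunks):
    """Encode chunks sequentially, keeping only a running (sum, count) state."""
    total, count = np.zeros(W.shape[1]), 0
    for chunk in chunks:
        total += (chunk @ W).sum(axis=0)
        count += len(chunk)
    return total / count

X = rng.normal(size=(10, 4))         # a set of 10 elements
perm = rng.permutation(len(X))       # any permutation ...
chunks = np.array_split(X[perm], 3)  # ... of any random partition

assert np.allclose(encode_full(X), encode_streaming(chunks))  # MBC holds
```

A non-MBC model (e.g., one with attention across all set elements) would fail this assertion, which is the gap the UMBC framework addresses by converting such models to satisfy MBC.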