We investigate the composability of soft rules learned by relational neural architectures operating over object-centric (slot-based) representations, under a variety of sparsity-inducing constraints. We find that increasing sparsity, especially on features, improves the performance of some models and leads to simpler relations. We also observe that object-centric representations can be detrimental when not all objects are fully captured, a failure mode to which CNNs are less prone. These findings highlight the trade-offs between interpretability and performance, even for models designed to tackle relational tasks.