We propose a novel Bayesian neural network architecture that can learn invariances from data alone by inferring a posterior distribution over different weight-sharing schemes. We show that our model outperforms comparable non-invariant architectures when trained on datasets that contain specific invariances, even when no data augmentation is performed.
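The core idea, inferring which weight-sharing scheme best explains the data, can be illustrated with a toy sketch. This is an assumed illustration, not the paper's actual model: it compares an unconstrained linear map against a circulant (translation-equivariant) one, scoring each scheme with a crude BIC-style approximation to the marginal likelihood and normalising the scores into a "posterior" over schemes.

```python
# Toy sketch (illustrative only, not the paper's model): a "posterior" over
# two weight-sharing schemes via a BIC-style marginal-likelihood proxy.
import numpy as np

rng = np.random.default_rng(0)
n, N = 8, 200

# Ground truth: a circulant weight matrix, i.e. translation-equivariant sharing.
w_true = rng.normal(size=n)
W_true = np.stack([np.roll(w_true, i) for i in range(n)])
X = rng.normal(size=(N, n))
Y = X @ W_true.T + 0.1 * rng.normal(size=(N, n))

def fit_full(X, Y):
    """No sharing: n*n free parameters, fit by least squares."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return (Y - X @ W).ravel(), X.shape[1] * Y.shape[1]

def fit_circulant(X, Y):
    """Circulant sharing: n free parameters, each row a shift of one vector."""
    N, n = X.shape
    # Output i depends on w via Y[:, i] = sum_k w[k] * X[:, (k + i) % n].
    A = np.concatenate([X[:, (np.arange(n) + i) % n] for i in range(n)])
    y = Y.T.ravel()  # stack outputs to match A's row blocks
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ w, n

def bic_log_evidence(resid, p):
    """Gaussian fit term minus a BIC complexity penalty for p parameters."""
    m = resid.size
    return -0.5 * m * np.log(resid @ resid / m) - 0.5 * p * np.log(m)

scores = np.array([bic_log_evidence(*fit_full(X, Y)),
                   bic_log_evidence(*fit_circulant(X, Y))])
posterior = np.exp(scores - scores.max())
posterior /= posterior.sum()
print(dict(zip(["full", "circulant"], posterior.round(3))))
```

Because the data really are translation-equivariant, the cheaper circulant scheme fits equally well with far fewer parameters, so the complexity penalty drives nearly all posterior mass onto it; the paper's contribution is to make this kind of scheme selection part of Bayesian inference in a neural network.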