Appropriate weight initialization has long been key to successfully training neural networks. Recently, batch normalization has diminished the role of weight initialization by simply normalizing each layer based on batch statistics. Unfortunately, batch normalization has several drawbacks when applied to the small batch sizes that are often required to cope with memory limitations when learning on point clouds. While well-founded weight initialization strategies can render batch normalization unnecessary and thus avoid these drawbacks, no such approaches have been proposed for point convolutional networks. To fill this gap, we propose a framework to unify the multitude of continuous convolutions. This enables our main contribution, variance-aware weight initialization. We show that this initialization can avoid batch normalization while achieving similar and, in some cases, better performance.
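To give an intuition for variance-aware initialization in a point-convolution setting, the sketch below applies the standard He-style variance-preserving heuristic, with the effective fan-in extended to include the average neighborhood size. This is an illustrative assumption, not the derivation from the paper; the function name, its parameters, and the use of a fixed average neighbor count are hypothetical.

```python
import numpy as np

def variance_aware_init(num_in, num_out, avg_neighbors, rng=None):
    """Illustrative variance-preserving initialization for a point convolution.

    Assumes each output point feature sums contributions from roughly
    `avg_neighbors` neighboring points, each contributing a linear map of
    `num_in` input features. To keep the output variance close to the input
    variance under ReLU activations (He-style reasoning), the weights are
    drawn with standard deviation sqrt(2 / (num_in * avg_neighbors)).
    """
    rng = np.random.default_rng() if rng is None else rng
    std = np.sqrt(2.0 / (num_in * avg_neighbors))
    return rng.normal(0.0, std, size=(num_in, num_out))

# Example: a layer with 64 input features and ~16 neighbors per point.
W = variance_aware_init(num_in=64, num_out=128, avg_neighbors=16)
print(W.std())  # close to sqrt(2 / (64 * 16)) ≈ 0.044
```

The point of such a rule is that, if the per-layer output variance is preserved by construction, the per-batch renormalization that batch normalization performs becomes unnecessary.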