In this paper we analyze the classification performance of neural network structures without parametric inference. Using neural architecture search, we empirically demonstrate that it is possible to find random-weight architectures, a deep prior, that enable a linear classifier to perform on par with fully trained deep counterparts. Through ablation experiments, we exclude the possibility of winning a weight-initialization lottery and confirm that suitable deep priors do not require additional inference. In an extension to continual learning, we investigate the possibility of incremental learning free of catastrophic interference. Under the assumption that classes originate from the same data distribution, a deep prior found on only a subset of the classes is shown to allow discrimination of further classes through training of a simple linear classifier.
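The setup described above can be illustrated with a minimal sketch (not the authors' code): a randomly initialized, frozen convolutional network serves as the fixed deep prior, and only a linear classifier on top of its features is trained. The architecture, dimensions, and hyperparameters below are illustrative placeholders, written here in PyTorch.

```python
import torch
import torch.nn as nn

class RandomDeepPrior(nn.Module):
    """Frozen random-weight feature extractor with a trainable linear classifier."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Randomly initialized feature extractor; its weights are never updated.
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        for p in self.features.parameters():
            p.requires_grad = False
        # Parametric inference is confined to this linear classifier.
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        with torch.no_grad():
            z = self.features(x)
        return self.classifier(z)

model = RandomDeepPrior()
# The optimizer only sees the linear classifier's parameters; the deep prior stays fixed,
# which is also what would allow new classes to be added without interference.
optimizer = torch.optim.SGD(model.classifier.parameters(), lr=0.1)
```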