Large width limits have been a recent focus of deep learning research: modulo computational practicalities, do wider networks outperform narrower ones? Answering this question has been challenging, as conventional networks gain representational power with width, potentially masking any negative effects. Our analysis in this paper decouples capacity and width via the generalization of neural networks to Deep Gaussian Processes (Deep GPs), a class of hierarchical models that subsume neural nets. In doing so, we aim to understand how width affects standard neural networks once they have sufficient capacity for a given modeling task. Our theoretical and empirical results on Deep GPs suggest that large width is generally detrimental to hierarchical models. Surprisingly, we prove that even nonparametric Deep GPs converge to Gaussian processes, effectively becoming shallower without any increase in representational power. The posterior, which corresponds to a mixture of data-adaptable basis functions, becomes less data-dependent with width. Our tail analysis demonstrates that width and depth have opposite effects: depth accentuates a model's non-Gaussianity, while width makes models increasingly Gaussian. We find there is a "sweet spot" that maximizes test set performance before the limiting GP behavior prevents adaptability, occurring at width 1 or 2 for nonparametric Deep GPs. These results make strong predictions about the same phenomenon in conventional neural networks: we show empirically that many neural network architectures need 10-500 hidden units, depending on the dataset, for sufficient capacity, but further width degrades test performance.
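The claim that width drives models toward Gaussianity can be illustrated with a minimal numpy sketch (not from the paper): sample the outputs of random single-hidden-layer tanh networks at a fixed input and measure how far the output distribution is from Gaussian via excess kurtosis. As the central limit theorem predicts, the excess kurtosis is large at width 1 and shrinks toward zero (the Gaussian value) as width grows. The network parameterization and sample sizes here are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def random_net_outputs(width, n_samples, x=1.0, rng=None):
    """Sample outputs of random single-hidden-layer tanh networks at a
    fixed scalar input x, using the standard 1/sqrt(width) output scaling
    so that output variance stays roughly width-independent."""
    rng = rng if rng is not None else np.random.default_rng(0)
    W = rng.standard_normal((n_samples, width))  # input-to-hidden weights
    V = rng.standard_normal((n_samples, width))  # hidden-to-output weights
    return (V * np.tanh(W * x)).sum(axis=1) / np.sqrt(width)

def excess_kurtosis(samples):
    """Fourth standardized moment minus 3; exactly 0 for a Gaussian."""
    z = (samples - samples.mean()) / samples.std()
    return (z ** 4).mean() - 3.0

rng = np.random.default_rng(0)
narrow = excess_kurtosis(random_net_outputs(1, 200_000, rng=rng))
wide = excess_kurtosis(random_net_outputs(512, 200_000, rng=rng))
# The width-1 output distribution is heavy-tailed (clearly non-Gaussian),
# while the width-512 distribution is nearly Gaussian.
```

Depth, by contrast, compounds the non-Gaussianity at each layer, which is the opposing effect the abstract's tail analysis describes.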