In this work we explore the information processing inside neural networks using logistic regression probes \cite{probes} and the saturation metric \cite{featurespace_saturation}. We show that problem difficulty and neural network capacity affect the probes' predictive performance in an antagonistic manner, opening the possibility of detecting over- and under-parameterization of neural networks for a given task. We further show that the observed effects are independent of previously reported pathological patterns such as the ``tail pattern'' described in \cite{featurespace_saturation}. Finally, we show that saturation patterns converge early during training, allowing for quicker cycle times during analysis.