Deep learning has been successfully applied to various tasks, but its underlying mechanism remains unclear. Neural networks associate similar inputs in the visible layer to the same state of hidden variables in deep layers. The fraction of inputs that are associated to the same state is a natural measure of similarity and is simply related to the cost in bits required to represent these inputs. The degeneracy of states with the same information cost provides instead a natural measure of noise and is simply related to the entropy of the frequency of states, which we call the relevance. Representations with minimal noise, at a given level of similarity (resolution), are those that maximise the relevance. A signature of such efficient representations is that frequency distributions follow power laws. We show, in extensive numerical experiments, that deep neural networks extract a hierarchy of efficient representations from data, because they i) achieve low levels of noise (i.e. high relevance) and ii) exhibit power law distributions. We also find that the layer that most efficiently and reliably generates patterns of the training data is the one at which relevance and resolution trade at the same price, which implies that the frequency distribution follows Zipf's law.
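To fix ideas, the two quantities can be written down explicitly. The abstract does not give formulas, so the notation below is an assumption, following the standard resolution/relevance formulation: $M$ inputs are mapped to hidden states $s$, $k_s$ counts the inputs that land on state $s$, and $m_k$ is the number of states observed exactly $k$ times.
\begin{align}
  \hat{H}[s] &= -\sum_{s} \frac{k_s}{M} \log \frac{k_s}{M}
  && \text{(resolution: coding cost per input)} \\
  \hat{H}[K] &= -\sum_{k} \frac{k\, m_k}{M} \log \frac{k\, m_k}{M}
  && \text{(relevance: entropy of the frequency of states)}
\end{align}
In this notation, maximising $\hat{H}[K]$ at fixed $\hat{H}[s]$ yields power-law frequency distributions, and the layer where the two are traded at the same price, $d\hat{H}[K]/d\hat{H}[s] = -1$, corresponds to Zipf's law.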