Neural networks have gained importance as machine learning models that achieve state-of-the-art performance on large-scale image classification, object detection, and natural language processing tasks. In this paper, we consider noisy binary neural networks, where each neuron has a non-zero probability of producing an incorrect output. These noisy models may arise in biological, physical, and electronic contexts, and they constitute an important class of models relevant to the physical world. Intuitively, the number of neurons in such systems has to grow to compensate for the noise while maintaining the same level of expressive power and computational reliability. Our key finding is a lower bound on the required number of neurons in noisy neural networks, the first of its kind. To prove this lower bound, we take an information-theoretic approach and obtain a novel strong data processing inequality (SDPI), which not only generalizes the Evans-Schulman results for binary symmetric channels to general channels, but also drastically improves the tightness when applied to estimate end-to-end information contraction in networks. Our SDPI can be applied to various information processing systems, including neural networks and cellular automata. Applying the SDPI to noisy binary neural networks, we obtain our key lower bound and investigate its implications for network depth-width trade-offs; our results suggest a depth-width trade-off for noisy neural networks that is very different from the established understanding of noiseless neural networks. Furthermore, we apply the SDPI to study fault-tolerant cellular automata and obtain bounds on the error-correction overhead and the relaxation time. This paper offers new insight into noisy information processing systems through the lens of information theory.
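For context, the classical binary-symmetric-channel special case of Evans and Schulman that the SDPI above generalizes can be sketched as follows: for any Markov chain $X \to W \to Y$ in which the binary signal $W$ is passed through a binary symmetric channel with crossover probability $\varepsilon$ to produce $Y$,
\[
I(X; Y) \;\le\; (1 - 2\varepsilon)^2 \, I(X; W),
\]
so iterating the bound along a chain of $d$ such noisy stages contracts the mutual information by a factor of at most $(1 - 2\varepsilon)^{2d}$. It is this kind of end-to-end contraction estimate that the general SDPI extends beyond binary symmetric channels and sharpens for networks.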