Neural networks (NNs) are known for their high predictive accuracy in complex learning problems. Besides these practical advantages, NNs also enjoy favourable theoretical properties such as universal approximation (UA) theorems. Binarized Neural Networks (BNNs) significantly reduce time and memory demands by restricting the weight and activation domains to two values. Despite their practical advantages, theoretical guarantees based on UA theorems for BNNs are rather sparse in the literature. We close this gap by providing UA theorems for fully connected BNNs under the following scenarios: (1) for binarized inputs, UA can be constructively achieved with a single hidden layer; (2) for real-valued inputs, UA cannot be achieved with a single hidden layer but can be constructively achieved with two hidden layers for Lipschitz-continuous functions. Our results show that fully connected BNNs can approximate functions universally, under certain conditions.
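To illustrate the flavour of the constructive result for binarized inputs (scenario 1), the following is a minimal sketch of a one-hidden-layer network with weights and activations restricted to {-1, +1} that represents an arbitrary Boolean target function. It assumes thresholds (biases) are allowed to be integers rather than binarized, a common convention in BNN formulations; the function name build_one_hidden_layer_bnn and the pattern-matching construction are illustrative assumptions and not necessarily the exact construction used in the paper.

```python
import numpy as np
from itertools import product

def sign(z):
    # Sign activation mapping to {-1, +1}; ties at 0 map to +1.
    return np.where(z >= 0, 1, -1)

def build_one_hidden_layer_bnn(f, n):
    # One hidden unit per input pattern on which the target function f is +1.
    positive = [np.array(p) for p in product([-1, 1], repeat=n) if f(np.array(p)) == 1]
    if not positive:
        return lambda x: -1            # constant -1 target needs no hidden units
    W1 = np.stack(positive)            # binary {-1,+1} weights: row j equals pattern j
    b1 = -(n - 1)                      # integer threshold: unit j fires only on its own pattern
    k = len(positive)

    def network(x):
        h = sign(W1 @ x + b1)          # hidden activations in {-1, +1}
        # Output unit: all +1 weights, integer threshold; outputs +1 iff any hidden unit fired.
        return int(sign(np.sum(h) + (k - 1)))

    return network

if __name__ == "__main__":
    # Example: XOR-like target on two binarized inputs (+1 iff the inputs differ).
    xor = lambda x: 1 if x[0] != x[1] else -1
    net = build_one_hidden_layer_bnn(xor, 2)
    for p in product([-1, 1], repeat=2):
        print(p, net(np.array(p)))     # matches xor on all four inputs
```

The sketch only covers Boolean targets on binarized inputs; the two-hidden-layer case for Lipschitz-continuous functions on real-valued inputs (scenario 2) additionally requires a first layer that discretizes the input, which is not shown here.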