Recently, machine learning with neural networks (NNs) has developed rapidly, and many new methods have been proposed. These methods are optimized for specific types of input data and work very effectively, but no single method can be applied universally to every kind of input. The human brain, by contrast, handles any kind of problem, so if we can mimic the way the human brain works, we should be able to construct artificial general intelligence. We consider how the human brain learns uniformly across domains and find that the essence of learning is the compression of information. We propose a toy NN model that mimics this aspect of the human brain and show that the NN can compress the input information without any ad hoc treatment, simply by setting the loss function properly. The loss function is expressed as the sum of the self-information to be remembered and the information lost through compression, and its minimum corresponds to the self-information of the original data. To evaluate the self-information to be remembered, we introduce the concept of memory: a memory expresses the compressed information, and learning proceeds by referring to previous memories. There are many similarities between this NN and the human brain, and the NN can be regarded as a realization of the free-energy principle, which has been proposed as a unified theory of the brain. This work can be applied to many kinds of data analysis and to cognitive science.
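For concreteness, one possible way to write the decomposition described above, in illustrative notation of our own choosing (the symbols below are not fixed by the abstract itself), is

\[
\mathcal{L} \;=\; \underbrace{I_{\mathrm{mem}}}_{\text{self-information to remember}} \;+\; \underbrace{\Delta I_{\mathrm{comp}}}_{\text{information lost in compression}},
\qquad
\min \mathcal{L} \;=\; I(x) \;=\; -\log p(x),
\]

where \(x\) denotes the original input data and \(I(x)\) is its Shannon self-information. On this reading, a lossless compression scheme would achieve \(\Delta I_{\mathrm{comp}} = 0\) with \(I_{\mathrm{mem}} = I(x)\), while any further compression trades remembered information against compression loss, so the minimum of \(\mathcal{L}\) sits exactly at the self-information of the original data, as stated.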