Deep learning owes much of its success to the astonishing expressiveness of neural networks. However, this comes at the cost of complex, black-box models that extrapolate poorly beyond the domain of the training dataset, conflicting with the goal of finding analytic expressions to describe science, engineering, and real-world data. Under the hypothesis that the hierarchical modularity of such laws can be captured by training a neural network, we introduce OccamNet, a neural network model that finds interpretable, compact, and sparse solutions for fitting data, à la Occam's razor. Our model defines a probability distribution over a non-differentiable function space. We introduce a two-step optimization method that samples functions and updates the weights with backpropagation based on cross-entropy matching in an evolutionary strategy: we train by biasing the probability mass toward better-fitting solutions. OccamNet can fit a variety of symbolic laws, including simple analytic functions, recursive programs, implicit functions, and simple image classifiers, and noticeably outperforms state-of-the-art symbolic regression methods on real-world regression datasets. Our method has a minimal memory footprint, does not require AI accelerators for efficient training, fits complicated functions in minutes of training on a single CPU, and demonstrates significant performance gains when scaled on a GPU. Our implementation, demonstrations, and instructions for reproducing the experiments are available at https://github.com/druidowm/OccamNet_Public.
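To make the two-step optimization concrete, the following is a minimal sketch, not the authors' implementation: it searches over a toy set of single primitives rather than OccamNet's layered compositions, and the primitive set, sample count, elite size, and learning rate are illustrative assumptions. It shows the core loop of sampling functions from a trainable categorical distribution, scoring their fit, and backpropagating a cross-entropy loss that biases probability mass toward the best-fitting samples.

```python
import torch

# Toy search space: candidate primitive functions (an illustrative choice).
primitives = [torch.sin, torch.cos, torch.square, lambda x: x]

# Trainable logits defining a probability distribution over primitives.
logits = torch.zeros(len(primitives), requires_grad=True)
optimizer = torch.optim.Adam([logits], lr=0.05)

x = torch.linspace(-1.0, 1.0, 64)
y = torch.sin(x)  # toy target: the symbolic law we hope to recover

for step in range(200):
    dist = torch.distributions.Categorical(torch.softmax(logits, dim=0))

    # Step 1: sample candidate functions and evaluate how well each fits.
    samples = dist.sample((32,))
    with torch.no_grad():
        errors = torch.stack(
            [((primitives[i](x) - y) ** 2).mean() for i in samples.tolist()]
        )

    # Step 2: bias probability mass toward the best-fitting samples by
    # maximizing their log-likelihood (a cross-entropy-style update).
    k = 5  # number of elite samples kept, an illustrative hyperparameter
    elite = samples[torch.topk(-errors, k).indices]
    loss = -dist.log_prob(elite).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("most probable primitive index:", torch.argmax(logits).item())  # expect 0 (sin)
```

Because only the sampled elites receive gradient signal, the function space itself never needs to be differentiable; gradients flow only through the sampling distribution, which is the property the abstract highlights.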