Artificial and biological neural networks (ANNs and BNNs) can encode inputs as combinations of individual neurons' activities. These combinatorial neural codes present a computational challenge for direct and efficient analysis due to their high dimensionality and often large volumes of data. Here we improve the computational complexity -- from factorial to quadratic time -- of direct algebraic methods previously applied only to small examples, and we apply them to large neural codes generated by experiments. These methods provide a novel and efficient way of probing the algebraic, geometric, and topological characteristics of combinatorial neural codes, and they offer insight into how such characteristics relate to learning and experience in neural networks. We introduce a procedure for hypothesis testing on the intrinsic features of neural codes using information geometry. We then apply these methods to neural activities from an ANN for image classification and a BNN for 2D navigation to estimate, without observing any inputs or outputs, the structure and dimensionality of the stimulus or task space. Additionally, we demonstrate how an ANN varies its internal representations across network depth and during learning.
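For concreteness, the sketch below (not from the paper; the synthetic data, variable names, and per-neuron thresholding rule are all illustrative assumptions) shows one common way a combinatorial neural code is obtained from recorded activity: binarize each neuron's activity and collect the distinct on/off patterns, each pattern being one codeword.

```python
import numpy as np

# Hypothetical example: extracting a combinatorial neural code from an
# activity matrix. Rows are time bins (or stimuli), columns are neurons.

rng = np.random.default_rng(0)
activity = rng.poisson(lam=1.0, size=(1000, 50))  # synthetic firing rates

# Binarize: a neuron is "on" in a bin if its rate exceeds a threshold.
# Using each neuron's mean rate as its threshold is one modeling choice.
threshold = activity.mean(axis=0)
codewords = (activity > threshold).astype(np.uint8)

# The combinatorial code is the set of distinct binary codewords observed.
code = set(map(tuple, codewords))
print(f"{len(code)} distinct codewords from {codewords.shape[0]} bins "
      f"over {codewords.shape[1]} neurons")
```

Analyses of the kind summarized above operate on this set of codewords alone, which is why the stimulus or task structure can be probed without observing any inputs or outputs.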