Quantifying similarity between neural representations -- e.g., hidden-layer activation vectors -- is a perennial problem in deep learning and neuroscience research. Existing methods compare deterministic responses (e.g., artificial networks that lack stochastic layers) or averaged responses (e.g., trial-averaged firing rates in biological data). However, these measures of deterministic representational similarity ignore the scale and geometric structure of noise, both of which play important roles in neural computation. To rectify this, we generalize previously proposed shape metrics (Williams et al., 2021) to quantify differences in stochastic representations. These new distances satisfy the triangle inequality, and thus can serve as a rigorous basis for many supervised and unsupervised analyses. Leveraging this framework, we find that the stochastic geometries of neurobiological representations of oriented visual gratings and of naturalistic scenes respectively resemble untrained and trained deep network representations. Further, we are able to more accurately predict certain network attributes (e.g., training hyperparameters) from a network's position in stochastic (versus deterministic) shape space.