The Bayesian brain hypothesis postulates that the brain accurately operates on statistical distributions according to Bayes' theorem. The random failure of presynaptic vesicles to release neurotransmitters may allow the brain to sample from posterior distributions over network parameters, which is interpreted as epistemic uncertainty. It has not previously been shown how random failures might allow networks to sample from observed distributions, also known as aleatoric or residual uncertainty. Sampling from both distributions enables probabilistic inference, efficient search, and creative or generative problem solving. We demonstrate that, under a population-code-based interpretation of neural activity, both types of distribution can be represented and sampled with synaptic failure alone. We first define a biologically constrained neural network and a sampling scheme based on synaptic failure and lateral inhibition. Within this framework, we derive dropout-based epistemic uncertainty and then prove an analytic mapping from synaptic efficacy to release probability that allows networks to sample from arbitrary, learned distributions represented by a receiving layer. Second, we show that this result leads to a local learning rule by which synapses adapt their release probabilities. Together, our results demonstrate complete Bayesian inference, related to the variational learning method of dropout, in a biologically constrained network using only locally learned synaptic failure rates.
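To make the core mechanism concrete, the following is a minimal Python sketch, not code from the paper: each synapse transmits only when a Bernoulli "release" draw succeeds, so repeated stochastic forward passes through a linear layer yield samples whose spread reflects the uncertainty induced by synaptic failure. All names, dimensions, and the choice of a linear readout are illustrative assumptions rather than the network defined in the text.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper).
n_in, n_out, n_samples = 50, 20, 1000

W = rng.normal(0.0, 0.3, size=(n_out, n_in))        # synaptic efficacies
p_release = rng.uniform(0.2, 0.9, size=(n_out, n_in))  # per-synapse release probabilities

x = rng.random(n_in)  # presynaptic activity (e.g., a population code)

def sample_forward(x, W, p_release, rng):
    """One stochastic forward pass: each synapse transmits only if its
    vesicle release succeeds (a Bernoulli draw), i.e. dropout applied at
    the level of individual synapses rather than whole units."""
    mask = rng.random(W.shape) < p_release
    return (W * mask) @ x

samples = np.stack([sample_forward(x, W, p_release, rng) for _ in range(n_samples)])

# Across passes, the sample mean approximates (W * p_release) @ x, while the
# sample spread is the variability contributed by random synaptic failure.
print(samples.mean(axis=0)[:3])
print(((W * p_release) @ x)[:3])
print(samples.std(axis=0)[:3])

Under this sketch, adjusting p_release changes the sampled distribution independently of the efficacies W, which is the degree of freedom the abstract's efficacy-to-release-probability mapping and local learning rule exploit.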