Gradient-based learning using error back-propagation (``backprop'') is a well-known contributor to much of the recent progress in AI. A less obvious, but arguably equally important, ingredient is parameter sharing, best known in the context of convolutional networks. In this essay we relate parameter sharing (``weight sharing'') to analogy making and to the school of thought of cognitive metaphor. We discuss how recurrent and auto-regressive models can be thought of as extending analogy making from static features to dynamic skills and procedures. We also discuss corollaries of this perspective: for example, how it can challenge the currently entrenched dichotomy between connectionist and ``classic'' rule-based views of computation.