Understanding how agents learn to generalize -- and, in particular, to extrapolate -- in high-dimensional, naturalistic environments remains a challenge for both machine learning and the study of biological agents. One approach has been the use of function learning paradigms, which allow people's empirical patterns of generalization over smooth scalar functions to be characterized precisely. However, to date, such work has not succeeded in identifying mechanisms that acquire the kinds of general-purpose representations over which function learning can operate to exhibit the patterns of generalization observed in human empirical studies. Here, we present a framework for how a learner may acquire such representations, which then support generalization -- and extrapolation in particular -- in a few-shot fashion. Taking inspiration from a classic theory of visual processing, we construct a self-supervised encoder that implements the basic inductive bias of invariance under topological distortions. We show that the resulting representations outperform those from other models for unsupervised time-series learning on several downstream function learning tasks, including extrapolation.
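The abstract does not specify the encoder architecture or training objective, so the following is only a minimal sketch of one way "invariance under topological distortions" could be realized as a self-supervised objective: a small 1-D convolutional encoder trained with a contrastive (NT-Xent) loss whose positive pairs are two randomly, monotonically re-parameterized views of the same smooth scalar function. Every concrete choice here (the sinusoidal function generator, the warp strength, the network widths, the temperature) is an illustrative assumption, not the configuration reported in the paper.

```python
# Hedged sketch (assumed design, not the paper's method): contrastive
# self-supervised learning where the augmentation is a monotone domain warp,
# i.e. a simple topological distortion of a 1-D function sample.
import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_functions(batch, length=128):
    """Draw smooth scalar functions y = a * sin(w * x + phi) on [0, 1] (toy data)."""
    x = torch.linspace(0, 1, length)
    a = torch.rand(batch, 1) * 2
    w = torch.rand(batch, 1) * 8 + 1
    phi = torch.rand(batch, 1) * 6.28
    return a * torch.sin(w * x + phi)                      # (batch, length)

def monotone_warp(y, strength=0.3):
    """Topological distortion: resample each series along a random,
    strictly increasing re-parameterization of its domain."""
    batch, length = y.shape
    steps = F.softmax(torch.randn(batch, length) * strength, dim=1)
    t = torch.cumsum(steps, dim=1)                         # monotone grid
    t = (t - t[:, :1]) / (t[:, -1:] - t[:, :1] + 1e-8)     # rescale to [0, 1]
    idx = t * (length - 1)
    lo = idx.floor().long().clamp(0, length - 2)
    frac = idx - lo.float()
    y_lo = torch.gather(y, 1, lo)
    y_hi = torch.gather(y, 1, lo + 1)
    return y_lo * (1 - frac) + y_hi * frac                 # linear interpolation

class Encoder(nn.Module):
    """Small 1-D conv encoder producing unit-norm embeddings (assumed architecture)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, y):
        return F.normalize(self.net(y.unsqueeze(1)), dim=1)

def nt_xent(z1, z2, temperature=0.1):
    """Standard NT-Xent contrastive loss over a batch of positive pairs."""
    b = z1.shape[0]
    z = torch.cat([z1, z2], dim=0)                         # (2B, dim)
    sim = z @ z.t() / temperature
    sim = sim.masked_fill(torch.eye(2 * b, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(b)])
    return F.cross_entropy(sim, targets)

encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(200):                                    # toy training loop
    y = sample_functions(64)
    loss = nt_xent(encoder(monotone_warp(y)), encoder(monotone_warp(y)))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Under this reading, two warped views of the same function are pulled together in embedding space while views of different functions are pushed apart, so the encoder is trained to be invariant to the distortions while remaining sensitive to the functions' shape; the learned embeddings could then be frozen and evaluated on downstream function learning tasks such as extrapolation.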