Many complex problems are naturally understood in terms of symbolic concepts. For example, our concept of "cat" is related to our concepts of "ears" and "whiskers" in a non-arbitrary way. Fodor (1998) proposes one theory of concepts, which emphasizes symbolic representations related via constituency structures. Whether neural networks are consistent with such a theory is open for debate. We propose unit tests for evaluating whether a system's behavior is consistent with several key aspects of Fodor's criteria. Using a simple visual concept learning task, we evaluate several modern neural architectures against this specification. We find that models succeed on tests of groundedness, modularity, and reusability of concepts, but that important questions about causality remain open. Resolving these will require new methods for analyzing models' internal states.