Consider the finite state graph that results from a simple, discrete dynamical system in which an agent moves in a rectangular grid, picking up and dropping packages. Can the state variables of the problem, namely the agent location and the package locations, be recovered from the structure of the state graph alone, without access to information about the objects, the structure of the states, or any background knowledge? We show that this is possible provided that the dynamics is learned over a suitable domain-independent first-order causal language that makes room for objects and relations that are not assumed to be known. The preference for the most compact representation in the language that is compatible with the data provides a strong and meaningful learning bias that makes this possible. The language of structural causal models (SCMs) is the standard language for representing (static) causal models, but in dynamic worlds populated by objects, first-order causal languages such as those used in "classical AI planning" are required. While "classical AI" relies on handcrafted representations, similar representations can be learned from unstructured data over the same languages. Indeed, it is the languages, and the preference for compact representations in them, that provide structure to the world, uncovering objects, relations, and causes.
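To make the learning input concrete, the state graph of such a grid-and-package system can be sketched as follows. This is a minimal illustration, not the paper's construction: the 2x2 grid size, the single package, and the `(agent, pkg)` state encoding are assumptions for illustration only. Crucially, the learner would see only the resulting unlabeled adjacency relation, not the structured states used here to generate it.

```python
# Sketch (illustrative assumptions): one agent, one package, a 2x2 grid.
# A state is a pair (agent_cell, pkg), where pkg is a cell or 'held'.
W, H = 2, 2
cells = [(x, y) for x in range(W) for y in range(H)]

def successors(state):
    """Enumerate the states reachable in one action from `state`."""
    agent, pkg = state
    succs = []
    # Move actions: step to an adjacent cell inside the grid.
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        nx, ny = agent[0] + dx, agent[1] + dy
        if 0 <= nx < W and 0 <= ny < H:
            succs.append(((nx, ny), pkg))
    # Pickup: the package is in the agent's cell.
    if pkg == agent:
        succs.append((agent, 'held'))
    # Drop: the agent is holding the package.
    if pkg == 'held':
        succs.append((agent, agent))
    return succs

# The full state graph, given to the learner only as bare structure.
states = [(a, p) for a in cells for p in cells + ['held']]
edges = {s: successors(s) for s in states}
```

For this 2x2 instance there are 4 x 5 = 20 states (four agent cells times four package cells plus 'held'); the question posed in the abstract is whether the variables `agent` and `pkg` can be recovered from `edges` alone.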