The Abstraction and Reasoning Corpus (ARC) is a set of tasks that tests an agent's ability to flexibly solve novel problems. While most ARC tasks are easy for humans, they are challenging for state-of-the-art AI. How do we build intelligent systems that can generalize to novel situations and understand human instructions in domains such as ARC? We posit that the answer may be found by studying how humans communicate with each other when solving these tasks. We present LARC, the Language-annotated ARC: a collection of natural language descriptions by a group of human participants, unfamiliar both with ARC and with each other, who instruct each other on how to solve ARC tasks. LARC contains successful instructions for 88\% of the ARC tasks. We analyze the collected instructions as `natural programs', finding that most natural program concepts have analogies in typical computer programs. However, unlike the precision required when programming a computer, we find that humans both anticipate and exploit ambiguities to communicate effectively. We demonstrate that a state-of-the-art program synthesis technique, which leverages the additional language annotations, outperforms its language-free counterpart.