Some claim language models understand us. Others won't hear of it. To clarify the debate, I investigate three views of human language understanding: as-mapping, as-reliability, and as-representation. I argue that while behavioral reliability is necessary for understanding, internal representations are sufficient; they climb the right hill. I review state-of-the-art language and multi-modal models: they are pragmatically challenged by under-specification of form. I question the Scaling Paradigm: limits on resources may prevent scaled-up models from approaching understanding. Last, I describe how as-representation advances a science of understanding. We need work that probes model internals, adds more of human language, and measures what models can learn.