Thanks to rapid progress in artificial intelligence, we have entered an era when technology and philosophy intersect in interesting ways. Sitting squarely at the centre of this intersection are large language models (LLMs). The more adept LLMs become at mimicking human language, the more vulnerable we become to anthropomorphism, to seeing the systems in which they are embedded as more human-like than they really are. This trend is amplified by the natural tendency to use philosophically loaded terms, such as "knows", "believes", and "thinks", when describing these systems. To mitigate this trend, this paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work. The hope is that increased scientific precision will encourage more philosophical nuance in the discourse around artificial intelligence, both within the field and in the public sphere.