Languages are not created at random; they exist to communicate information. There is a strong association between languages and their underlying meanings, which yields a joint distribution over languages and meanings that is sparse and heavily peaked along these associations. Moreover, because of this sparsity, the peak values closely match the marginal distribution of languages. With the advent of LLMs trained on massive corpora at large model scales, we can now accurately estimate the marginal distribution of languages, which provides a convenient means of exploiting the sparse structure of the joint distribution for effective inference. In this paper, we categorize languages as either unambiguous or $\epsilon$-ambiguous and present quantitative results demonstrating that the emergent abilities of LLMs, such as language understanding, in-context learning, chain-of-thought prompting, and effective instruction fine-tuning, can all be attributed to Bayesian inference on the sparse joint distribution of languages.
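To make the central claim concrete, the following is a minimal toy sketch (not the paper's construction; the sentences, meanings, and probability values are purely illustrative). It builds a small, sparse joint distribution over (sentence, meaning) pairs, shows that each peak value is close to the corresponding marginal over sentences, and recovers the meaning of a sentence by Bayesian inference.

```python
import numpy as np

# Hypothetical sparse joint distribution p(sentence, meaning): almost all of
# the mass sits on the "correct" pairings, with a small epsilon of mass spread
# over ambiguous pairings.
sentences = ["the cat sat", "the dog ran", "it is raining"]
meanings = ["CAT_SIT", "DOG_RUN", "RAIN"]

eps = 0.01
joint = np.full((3, 3), eps / 6)          # off-peak (ambiguous) mass
np.fill_diagonal(joint, (1.0 - eps) / 3)  # peaked mass on the true pairings

# Marginal over sentences: the quantity an LLM is assumed to approximate.
p_sentence = joint.sum(axis=1)

# Sparsity implies p(s, m*) ~= p(s): each peak of the joint nearly equals the
# corresponding marginal, since almost all of p(s, .) sits on one meaning m*.
print("peak values:", np.diag(joint).round(4))
print("marginals:  ", p_sentence.round(4))

# Bayesian inference of meaning given a sentence: p(m | s) = p(s, m) / p(s).
posterior = joint / p_sentence[:, None]
print("p(meaning | 'the cat sat') =",
      dict(zip(meanings, posterior[0].round(3))))
```

Under these illustrative numbers the posterior concentrates almost entirely on the intended meaning, which is the sense in which inference over a sparse, heavily peaked joint distribution can be effective once the marginal is known.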