This paper pursues the insight that language models naturally enable an intelligent variation operator similar in spirit to evolutionary crossover. In particular, language models of sufficient scale demonstrate in-context learning, i.e. they can learn associations from a small number of input patterns and generate outputs that incorporate those associations (also called few-shot prompting). This ability can be leveraged to form a simple but powerful variation operator: prompt a language model with a few text-based genotypes (such as code, plain-text sentences, or equations), and parse its output as those genotypes' offspring. The promise of such language model crossover (which is simple to implement and can leverage many different open-source language models) is that it provides a straightforward mechanism for evolving semantically rich text representations (with few domain-specific tweaks), and that it naturally benefits from continuing progress in language models. Experiments in this paper highlight the versatility of language model crossover through evolving binary bit-strings, sentences, equations, text-to-image prompts, and Python code. The conclusion is that language model crossover is a promising method for evolving genomes representable as text.
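To make the operator concrete, the following is a minimal sketch of language model crossover, assuming a Hugging Face text-generation pipeline as the underlying model; the model choice, the "Genotype:" prompt format, and the parsing convention are illustrative assumptions, not details prescribed by the paper.

```python
# Minimal sketch of language model crossover (LMX).
# Assumptions: a Hugging Face causal LM via the text-generation pipeline;
# the prompt format and offspring-parsing rule below are illustrative.
import random
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # any causal LM could substitute

def lmx_crossover(parents: list[str], num_children: int = 1) -> list[str]:
    """Prompt the LM with a few parent genotypes; parse completions as offspring."""
    # Present the parents as a few-shot pattern; in-context learning lets the
    # model continue the pattern with a plausible recombination of the parents.
    shuffled = random.sample(parents, len(parents))
    prompt = "".join(f"Genotype: {p}\n" for p in shuffled) + "Genotype:"
    children = []
    for _ in range(num_children):
        out = generator(prompt, max_new_tokens=32, do_sample=True,
                        return_full_text=False)[0]["generated_text"]
        # Take the first generated line as one offspring genotype.
        children.append(out.split("\n")[0].strip())
    return children

# Example usage: recombining plain-text sentence genotypes.
offspring = lmx_crossover([
    "the quick brown fox jumps over the lazy dog",
    "a quick red fox leaps over a sleeping cat",
    "the sly brown fox hops over the tired hound",
])
print(offspring)
```

In this sketch, the only domain-specific choices are how genotypes are rendered into the prompt and how completions are parsed back out, which is why the same operator can apply to bit-strings, sentences, equations, prompts, or code.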