While code-mixing is a common linguistic practice in many parts of the world, collecting high-quality and low-cost code-mixed data remains a challenge for natural language processing (NLP) research. The recent proliferation of Large Language Models (LLMs) compels one to ask: can these systems be used for data generation? In this article, we explore prompting LLMs in a zero-shot manner to create code-mixed data for five languages spoken in South East Asia (SEA) -- Indonesian, Malay, Chinese, Tagalog, and Vietnamese -- as well as the creole language Singlish. We find that ChatGPT shows the most potential, capable of producing code-mixed text 68% of the time when the term "code-mixing" is explicitly defined. Moreover, both ChatGPT and InstructGPT (davinci-003) perform notably well at generating Singlish text, averaging a 96% success rate across a variety of prompts. The code-mixing proficiency of ChatGPT and InstructGPT, however, is dampened by word-choice errors that lead to semantic inaccuracies. Other multilingual models, such as BLOOMZ and Flan-T5-XXL, fail to produce code-mixed text altogether. By highlighting the limited promise of LLMs in this specific form of low-resource data generation, we call for a measured approach when applying similar techniques to other data-scarce NLP contexts.
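To make the zero-shot setup concrete: the model receives a single instruction with no in-context examples, optionally preceded by an explicit definition of "code-mixing". The sketch below is hypothetical -- the function name, definition wording, and prompt template are illustrative assumptions, not the exact prompts used in the study:

```python
# Hypothetical sketch of zero-shot prompt construction for code-mixed
# text generation. The exact wording used in the study may differ.

CODE_MIXING_DEFINITION = (
    "Code-mixing is the practice of alternating between two or more "
    "languages within a single sentence or utterance."
)

def build_zero_shot_prompt(language: str, define_term: bool = True) -> str:
    """Build a zero-shot prompt, optionally defining 'code-mixing' up front.

    No examples are included in the prompt -- that is what makes the
    setup zero-shot.
    """
    parts = []
    if define_term:
        # The abstract reports higher success when the term is
        # explicitly defined, so the definition is prepended here.
        parts.append(CODE_MIXING_DEFINITION)
    parts.append(f"Generate a sentence that code-mixes {language} and English.")
    return " ".join(parts)

print(build_zero_shot_prompt("Malay"))
```

The resulting string would then be sent to a model such as ChatGPT or InstructGPT as a single user message; toggling `define_term` captures the with/without-definition prompt variants compared in the abstract.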
Prompting Large Language Models to Generate Code-Mixed Texts: The Case of South East Asian Languages