While code-mixing is a common linguistic practice in many parts of the world, collecting high-quality and low-cost code-mixed data remains a challenge for natural language processing (NLP) research. The recent proliferation of Large Language Models (LLMs) compels one to ask: can these systems be used for data generation? In this article, we explore prompting multilingual LLMs in a zero-shot manner to create code-mixed data for five languages in South East Asia (SEA) -- Indonesian, Malay, Chinese, Tagalog, and Vietnamese -- as well as the creole language Singlish. We find that ChatGPT shows the most potential, producing code-mixed text 68% of the time when the term "code-mixing" is explicitly defined. Moreover, both ChatGPT and InstructGPT (davinci-003) perform notably well in generating Singlish texts, averaging a 96% success rate across a variety of prompts. Their code-mixing proficiency, however, is dampened by word-choice errors that lead to semantic inaccuracies. Other multilingual models such as BLOOMZ and Flan-T5-XXL are altogether unable to produce code-mixed texts. By highlighting the limited promise of LLMs in this specific form of low-resource data generation, we call for a measured approach when applying similar techniques to other data-scarce NLP contexts.