Non-native speakers show difficulties with spoken word processing. Many studies attribute these difficulties to imprecise phonological encoding of words in lexical memory. We test an alternative hypothesis: that some of these difficulties can arise from non-native speakers' phonetic perception. We train a computational model of phonetic learning, which has no access to phonology, on either one or two languages. We first show that the model exhibits predictable behaviors on phone-level and word-level discrimination tasks. We then test the model on a spoken word processing task, showing that phonology may not be necessary to explain some of the word processing effects observed in non-native speakers. We run an additional analysis of the model's lexical representation space, showing that the two training languages are not fully separated in that space, just as the two languages of a bilingual human speaker are not fully separated in the lexicon.