Modern NLP models are becoming better conversational agents than their predecessors. Recurrent Neural Networks (RNNs), and especially the Long Short-Term Memory (LSTM) architecture, allow an agent to better store and use information about semantic content, a trend that has become even more pronounced with Transformer models. Large Language Models (LLMs) such as OpenAI's GPT-3 are known to be able to construct and follow a narrative, which enables the system to adopt personas on the fly, adapt them, and play along in conversational stories. However, practical experimentation with GPT-3 reveals a recurring problem with these modern NLP systems: they can "get stuck" in a narrative, so that further conversation, prompt executions, or commands become futile. This is referred to here as the "Locked-In Problem" and is exemplified with an experimental case report, followed by a discussion of the practical and social concerns that accompany this problem.