Pretrained contextualized embeddings are powerful word representations for structured prediction tasks. Recent work found that better word representations can be obtained by concatenating different types of embeddings. However, the selection of embeddings to form the best concatenated representation usually varies depending on the task and the collection of candidate embeddings, and the ever-increasing number of embedding types makes it a more difficult problem. In this paper, we propose Automated Concatenation of Embeddings (ACE) to automate the process of finding better concatenations of embeddings for structured prediction tasks, based on a formulation inspired by recent progress on neural architecture search. Specifically, a controller alternately samples a concatenation of embeddings, according to its current belief of the effectiveness of individual embedding types in consideration for a task, and updates the belief based on a reward. We follow strategies in reinforcement learning to optimize the parameters of the controller and compute the reward based on the accuracy of a task model, which is fed with the sampled concatenation as input and trained on a task dataset. Empirical results on 6 tasks and 21 datasets show that our approach outperforms strong baselines and achieves state-of-the-art performance with fine-tuned embeddings in the vast majority of evaluations.
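The sampling-and-update loop described above can be sketched in miniature. The following is an illustrative sketch, not the authors' implementation: it keeps one Bernoulli logit per candidate embedding type, samples a binary selection mask (the "concatenation"), and updates the logits with a REINFORCE-style gradient, using a toy stand-in reward in place of actually training a task model. All names (`ACEController`, `toy_reward`) and hyperparameters are assumptions for illustration.

```python
import math
import random

class ACEController:
    """Minimal ACE-style controller sketch (illustrative, not the paper's code).

    Holds one Bernoulli logit per candidate embedding type, samples a binary
    mask selecting which embeddings to concatenate, and updates its belief
    with a REINFORCE-style gradient using the task model's reward.
    """

    def __init__(self, num_embeddings, lr=0.1):
        self.logits = [0.0] * num_embeddings
        self.lr = lr
        self.baseline = 0.0  # moving average of rewards, reduces variance

    def probs(self):
        # Sigmoid of each logit: current belief that embedding i is useful.
        return [1.0 / (1.0 + math.exp(-z)) for z in self.logits]

    def sample(self):
        # Sample a concatenation; resample until at least one type is kept.
        while True:
            mask = [1 if random.random() < p else 0 for p in self.probs()]
            if any(mask):
                return mask

    def update(self, mask, reward):
        # REINFORCE: d log p(mask) / d logit_i = mask_i - p_i.
        advantage = reward - self.baseline
        self.baseline = 0.9 * self.baseline + 0.1 * reward
        for i, p in enumerate(self.probs()):
            self.logits[i] += self.lr * advantage * (mask[i] - p)


def toy_reward(mask):
    """Stand-in for training a task model on the sampled concatenation.
    By construction, embedding types 0 and 2 are the useful ones here."""
    return 0.5 * mask[0] + 0.4 * mask[2] - 0.1 * sum(mask)


random.seed(0)
controller = ACEController(num_embeddings=4)
for _ in range(2000):
    mask = controller.sample()
    controller.update(mask, toy_reward(mask))

p = controller.probs()
# The controller's belief should now favor embedding types 0 and 2.
```

In the paper, `toy_reward` corresponds to training the task model on the sampled concatenation and measuring its accuracy on the task dataset; the baseline term is one common variance-reduction choice for policy-gradient updates.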