Figurative language generation is the task of reformulating a given text in a desired figure of speech while remaining faithful to the original context. We take a first step towards multi-figurative language modelling by providing a benchmark for the automatic generation of five common figurative forms in English. We train mFLAG by applying a multi-figurative language pre-training scheme on top of BART, together with a mechanism for injecting the target figurative information into the encoder; this enables the generation of text in a target figurative form from another figurative form without parallel figurative-figurative sentence pairs. Our approach outperforms all strong baselines. We also offer some qualitative analysis and reflections on the relationship between the different figures of speech.