Real-world tasks are often solved by pipelines of multiple models, each performing a sub-task in a larger chain, i.e., the output of one model serves as the input to the next. For example, the MATra model performs crosslingual transliteration in two stages, using English as an intermediate transliteration target when transliterating between two Indic languages. We propose EPIK, a novel distillation technique that condenses two-stage pipelines for hierarchical tasks into a single end-to-end model without compromising performance. The method can create end-to-end models for tasks that lack a dedicated end-to-end dataset, addressing the data scarcity problem. Using this technique, the EPIK model has been distilled from the MATra model, which performs crosslingual transliteration between five languages: English, Hindi, Tamil, Kannada and Bengali. The EPIK model transliterates without producing any intermediate English output while retaining the performance and accuracy of the MATra model: it achieves an average CER of 0.015 and an average phonetic accuracy of 92.1%. In addition, its average execution time is 54.3% lower than the teacher model's, and its encoder has a similarity score of 97.5% with the teacher encoder. In a few cases, the EPIK (student) model can even outperform the MATra (teacher) model from which it was distilled.
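The core idea of condensing a two-stage pipeline into an end-to-end model without a dedicated end-to-end dataset can be sketched as sequence-level distillation: run the teacher pipeline over raw source text and pair each input directly with the teacher's final output, skipping the English intermediate. The sketch below uses hypothetical toy lookup tables in place of the real MATra stages; the function names and transliterations are illustrative assumptions, not the actual models.

```python
# Sequence-level distillation sketch (hypothetical stand-ins for the MATra stages):
# the two-stage teacher pipeline labels raw source words, and its final outputs
# become the training corpus for a single end-to-end student model.

def stage1_to_english(word):
    # Hypothetical stage 1: Hindi -> English romanization (toy lookup, not MATra).
    return {"नमस्ते": "namaste", "धन्यवाद": "dhanyavaad"}[word]

def stage2_from_english(word):
    # Hypothetical stage 2: English -> Kannada transliteration (toy lookup).
    return {"namaste": "ನಮಸ್ತೆ", "dhanyavaad": "ಧನ್ಯವಾದ"}[word]

def teacher_pipeline(word):
    # Two-stage teacher: source -> English (intermediate) -> target.
    return stage2_from_english(stage1_to_english(word))

def build_distillation_corpus(source_words):
    # Pair each source word directly with the teacher's final output;
    # the English intermediate never appears in the student's training data,
    # so no dedicated end-to-end dataset is needed.
    return [(w, teacher_pipeline(w)) for w in source_words]

corpus = build_distillation_corpus(["नमस्ते", "धन्यवाद"])
```

A student model trained on such (source, target) pairs learns the composed mapping in one pass, which is why it can drop the intermediate English output and run faster than the two-stage teacher.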