By leveraging large amounts of product data collected across hundreds of live e-commerce websites, we construct 1000 unique classification tasks that share similarly structured input data comprising both text and images. These classification tasks focus on learning the product hierarchies of different e-commerce websites, which makes many of them correlated. Adopting a multi-modal transformer model, we solve these tasks in unison using multi-task learning (MTL). Extensive experiments are presented over an initial 100-task dataset to reveal best practices for "large-scale MTL" (i.e., MTL with more than 100 tasks). From these experiments, we derive a final, unified methodology composed of both best practices and new proposals, such as DyPa, a simple heuristic for automatically allocating task-specific parameters to tasks that could benefit from extra capacity. Using our large-scale MTL methodology, we successfully train a single model across all 1000 tasks in our dataset while using minimal task-specific parameters, thereby showing that it is possible to scale several orders of magnitude beyond current efforts in MTL.
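The abstract does not specify how DyPa decides which tasks receive extra capacity. The sketch below illustrates one plausible reading under stated assumptions: after evaluating the shared model, the tasks with the worst validation losses are granted a small residual bottleneck adapter, keeping the overall count of task-specific parameters minimal. All names here (`TaskAdapter`, `allocate_dypa_adapters`, `budget_pct`) and the adapter design are hypothetical illustrations, not the paper's actual implementation.

```python
# A minimal, hypothetical sketch of a DyPa-style allocation heuristic.
# The trigger condition, adapter architecture, and parameter budget are
# assumptions made for illustration; the paper only states that DyPa
# allocates task-specific parameters to tasks that could benefit from
# extra capacity.

import torch
import torch.nn as nn


class TaskAdapter(nn.Module):
    """Small bottleneck adapter holding a single task's private parameters."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the shared representation intact.
        return x + self.up(torch.relu(self.down(x)))


def allocate_dypa_adapters(val_losses: dict[str, float],
                           hidden_dim: int,
                           budget_pct: float = 0.1) -> dict[str, TaskAdapter]:
    """Give the worst-performing fraction of tasks their own adapter.

    val_losses: per-task validation loss under the fully shared model.
    budget_pct: fraction of tasks allowed task-specific parameters.
    """
    # Rank tasks from highest (worst) to lowest validation loss.
    ranked = sorted(val_losses, key=val_losses.get, reverse=True)
    n_extra = max(1, int(budget_pct * len(ranked)))
    return {task: TaskAdapter(hidden_dim) for task in ranked[:n_extra]}
```

Under this reading, the budget acts as the lever for "minimal task-specific parameters": with `budget_pct=0.1`, only 100 of the 1000 tasks would carry private adapter weights, while the remaining tasks rely entirely on the shared transformer.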