In the era of billion-parameter Language Models (LMs), start-ups have to follow trends and adapt their technology accordingly. Nonetheless, open challenges remain, since developing and deploying large models requires substantial computational resources and carries economic consequences. In this work, we follow the steps of the R&D group of a modern legal-tech start-up and present important insights on model development and deployment. We start from scratch by pre-training multiple domain-specific multilingual LMs that fit contractual and regulatory text better than the available alternatives (XLM-R). We present benchmark results for these models on a half-public, half-private legal benchmark comprising 5 downstream tasks, showing the impact of larger model size. Lastly, we examine the impact of a full-scale model compression pipeline comprising: a) Parameter Pruning, b) Knowledge Distillation, and c) Quantization. The resulting models are far more efficient without sacrificing overall performance.
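To make the three compression stages concrete, below is a minimal sketch using PyTorch and Hugging Face Transformers. The checkpoints, the pruning ratio, the distillation temperature `T` and mixing weight `alpha`, and the function names are illustrative assumptions, not the paper's actual setup.

```python
# Sketch of a compression pipeline: pruning -> distillation loss -> quantization.
# Model choices and hyperparameters are hypothetical placeholders.
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune
from transformers import AutoModelForSequenceClassification

teacher = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-large")
student = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base")

# a) Parameter pruning: zero out the 20% smallest-magnitude weights
#    in every linear layer of the student (L1 unstructured pruning).
for module in student.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.2)
        prune.remove(module, "weight")  # make the pruning permanent

# b) Knowledge distillation: train the student to match the teacher's
#    softened output distribution, mixed with the usual hard-label loss.
def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to account for the temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# c) Quantization: convert linear-layer weights to int8 for faster,
#    smaller CPU inference (post-training dynamic quantization).
quantized_student = torch.quantization.quantize_dynamic(
    student, {torch.nn.Linear}, dtype=torch.qint8
)
```

In practice the three stages are applied in sequence: prune the student, fine-tune it with the distillation loss against the larger teacher, then quantize the result for deployment.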