This paper describes the models developed by the AILAB-Udine team for the SMM4H 22 Shared Task. We explored the limits of Transformer-based models on text classification, entity extraction, and entity normalization, tackling Tasks 1, 2, 5, 6, and 10. The main takeaways from our participation in the different tasks are the overwhelmingly positive effect of combining different architectures through ensemble learning, and the great potential of generative models for term normalization.