This paper summarizes the joint participation of Trading Central Labs and the L3i laboratory of the University of La Rochelle in both sub-tasks of the FinSim-4 Shared Task evaluation campaign. The first sub-task aims to enrich the 'Fortia ESG taxonomy' with new lexicon entries, while the second aims to classify sentences as either 'sustainable' or 'unsustainable' with respect to ESG (Environment, Social and Governance) related factors. For the first sub-task, we proposed a model based on pre-trained Sentence-BERT models that projects sentences and concepts into a common space in order to better represent ESG concepts. The official task results show that our system yields a significant performance improvement over the baseline and outperforms all other submissions on the first sub-task. For the second sub-task, we combine the RoBERTa model with a feed-forward multi-layer perceptron in order to capture sentence context and classify the sentences. Our model achieved high accuracy scores (over 92%) and was ranked among the top 5 systems.
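As a minimal sketch of the first sub-task's idea (not the authors' exact pipeline), the snippet below embeds candidate lexicon terms and ESG concept labels into a shared space with a pre-trained Sentence-BERT model and assigns each term to its nearest concept by cosine similarity. The checkpoint name and the example terms and concepts are illustrative assumptions, not values taken from the paper.

```python
# Sketch: nearest-concept assignment in a shared Sentence-BERT embedding space.
from sentence_transformers import SentenceTransformer, util

# Assumed checkpoint for illustration; the paper's actual model may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

concepts = ["Greenhouse gas emissions", "Human rights", "Board diversity"]  # illustrative ESG concepts
terms = ["carbon footprint", "child labour policy"]                         # illustrative lexicon candidates

# Encode both terms and concept labels into the same embedding space.
concept_emb = model.encode(concepts, convert_to_tensor=True)
term_emb = model.encode(terms, convert_to_tensor=True)

# Cosine similarity between every term and every concept; keep the best match per term.
scores = util.cos_sim(term_emb, concept_emb)
for term, row in zip(terms, scores):
    best = int(row.argmax())
    print(f"{term!r} -> {concepts[best]} (score={float(row[best]):.3f})")
```

The same shared-space view underlies the taxonomy enrichment: a new term is attached to the concept whose label (or description) it lies closest to in embedding space.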