Because the performance of text classification models drops over time as the underlying data changes, developing models whose performance persists over time is important. Being able to predict how well a model will persist can inform the design of models that remain effective over longer periods. In this paper, we approach this problem from a practical perspective by assessing how well a wide range of language models and classification algorithms persist over time, and how dataset characteristics can help predict the temporal stability of different models. We perform longitudinal classification experiments on three datasets spanning 6 to 19 years and covering diverse tasks and types of data. We find that one can estimate how well a model will retain its performance over time based on (i) how the model performs over a restricted time period, extrapolated to a longer period, and (ii) linguistic characteristics of the dataset, such as the familiarity score between subsets from different years. These findings have important implications for designing text classification models that preserve performance over time.
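As one illustration of the kind of dataset characteristic referred to above, a familiarity score between yearly subsets can be approximated as lexical overlap: the fraction of tokens in one year's data whose vocabulary also appears in another year's data. The sketch below is a minimal, hypothetical proxy for such a score; the exact definition used in the paper may differ.

```python
def familiarity_score(train_texts, test_texts):
    """Illustrative familiarity proxy: fraction of token occurrences in
    test_texts whose word type also occurs in train_texts. This is an
    assumed vocabulary-overlap measure, not necessarily the paper's metric."""
    train_vocab = {tok for text in train_texts for tok in text.lower().split()}
    test_tokens = [tok for text in test_texts for tok in text.lower().split()]
    if not test_tokens:
        return 0.0
    return sum(tok in train_vocab for tok in test_tokens) / len(test_tokens)

# Hypothetical example: compare a 2010 subset against a 2019 subset.
subset_2010 = ["the senate passed the bill", "markets rallied on the news"]
subset_2019 = ["the app went viral on social media", "markets rallied again"]
print(f"familiarity(2010 -> 2019) = {familiarity_score(subset_2010, subset_2019):.2f}")
```

A low score between subsets from distant years would suggest greater vocabulary drift, and hence a larger expected drop in classifier performance over that gap.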