User-generated social media data is constantly changing as new trends influence online discussion, causing distribution shift in test data for social media NLP applications. In addition, training data is often subject to change as user data is deleted. Most current NLP systems are static and rely on fixed training data. As a result, they are unable to adapt to temporal change -- both test distribution shift and deleted training data -- without frequent, costly re-training. In this paper, we study temporal adaptation through the task of longitudinal hashtag prediction and propose a non-parametric technique as a simple but effective solution: non-parametric classifiers use datastores which can be updated, either to adapt to test distribution shift or to training data deletion, without re-training. We release a new benchmark dataset comprising 7.13M Tweets from 2021, along with their hashtags, broken into consecutive temporal buckets. We compare parametric neural hashtag classification and hashtag generation models, which need re-training for adaptation, with a non-parametric, training-free dense retrieval method that returns the nearest neighbor's hashtags based on text embedding distance. In experiments on our longitudinal Twitter dataset, we find that dense nearest neighbor retrieval achieves a relative performance gain of 64.12% over the best parametric baseline on test sets that exhibit distribution shift, without requiring gradient-based re-training. Furthermore, we show that our datastore approach is particularly well-suited to dynamically deleted user data, with negligible computational cost and performance loss. Our novel benchmark dataset and empirical analysis can support future inquiry into the important challenges presented by temporality in the deployment of AI systems on real-world user data.
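The updatable-datastore idea above can be sketched as a minimal nearest-neighbor hashtag classifier. This is an illustrative assumption, not the paper's implementation: the `HashtagDatastore` class, the linear-scan cosine search, and the raw-vector inputs are stand-ins for a real text encoder and an approximate-nearest-neighbor index. The key property it demonstrates is that both adaptation (adding new tweets) and deletion (removing a user's tweets) are cheap datastore updates with no gradient-based re-training.

```python
import numpy as np


class HashtagDatastore:
    """Sketch of a non-parametric nearest-neighbor hashtag predictor.

    Stores (embedding, hashtags) pairs. Prediction returns the hashtags
    of the nearest stored tweet by cosine similarity. Adding pairs adapts
    to test distribution shift; deleting pairs honors user data removal.
    Neither operation requires re-training a model.
    """

    def __init__(self, dim: int):
        self.ids: list[str] = []                       # tweet ids, parallel to embs rows
        self.embs = np.empty((0, dim), dtype=np.float32)
        self.tags: dict[str, list[str]] = {}           # tweet id -> hashtags

    def _unit(self, emb) -> np.ndarray:
        v = np.asarray(emb, dtype=np.float32)
        return v / (np.linalg.norm(v) + 1e-12)         # normalize once for cosine

    def add(self, tweet_id: str, emb, hashtags: list[str]) -> None:
        """Adapt to new data: append one (embedding, hashtags) entry."""
        self.ids.append(tweet_id)
        self.embs = np.vstack([self.embs, self._unit(emb)[None, :]])
        self.tags[tweet_id] = list(hashtags)

    def delete(self, tweet_id: str) -> None:
        """Honor a deletion request: drop one row, no re-training."""
        i = self.ids.index(tweet_id)
        self.ids.pop(i)
        self.embs = np.delete(self.embs, i, axis=0)
        del self.tags[tweet_id]

    def predict(self, emb) -> list[str]:
        """Return the hashtags of the nearest neighbor by cosine similarity."""
        sims = self.embs @ self._unit(emb)             # rows are unit-norm
        return self.tags[self.ids[int(np.argmax(sims))]]
```

In practice the embeddings would come from a frozen text encoder, so the only state that changes over time is the datastore itself; the deletion path is a constant-size bookkeeping operation rather than a retraining run.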