Real-world data often exhibit imbalanced distributions, where certain target values have significantly fewer observations. Existing techniques for dealing with imbalanced data focus on targets with categorical indices, i.e., different classes. However, many tasks involve continuous targets, where hard boundaries between classes do not exist. We define Deep Imbalanced Regression (DIR) as learning from such imbalanced data with continuous targets, dealing with potential missing data for certain target values, and generalizing to the entire target range. Motivated by the intrinsic difference between categorical and continuous label space, we propose distribution smoothing for both labels and features, which explicitly acknowledges the effects of nearby targets, and calibrates both label and learned feature distributions. We curate and benchmark large-scale DIR datasets from common real-world tasks in computer vision, natural language processing, and healthcare domains. Extensive experiments verify the superior performance of our strategies. Our work fills the gap in benchmarks and techniques for practical imbalanced regression problems. Code and data are available at https://github.com/YyzHarry/imbalanced-regression.
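The label-side half of the proposed distribution smoothing can be illustrated with a short sketch: smooth the empirical density of the continuous targets with a Gaussian kernel so that nearby target values share statistical strength, then use the inverse smoothed density to reweight samples. This is a minimal illustration, not the released implementation; the bin count, kernel width, and the inverse-density reweighting scheme here are assumptions for demonstration.

```python
import numpy as np

def lds_weights(labels, num_bins=100, sigma=2.0):
    """Sketch of label distribution smoothing (LDS) for imbalanced regression.

    Smooths the empirical label histogram with a Gaussian kernel (acknowledging
    the effect of nearby continuous targets), then returns per-sample weights
    proportional to the inverse smoothed density.
    """
    # Empirical label density over the continuous target range.
    hist, edges = np.histogram(labels, bins=num_bins)

    # Gaussian kernel; convolution spreads density mass to nearby bins.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    smoothed = np.convolve(hist.astype(float), kernel, mode="same")

    # Map each sample to its bin; weight is inverse smoothed density.
    bin_idx = np.clip(np.digitize(labels, edges) - 1, 0, num_bins - 1)
    w = 1.0 / np.maximum(smoothed[bin_idx], 1e-8)
    return w * len(w) / w.sum()  # normalize weights to mean 1
```

In practice such weights would multiply a per-sample regression loss (e.g., weighted MSE), upweighting rare target regions while remaining stable where the label density is smooth.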