Learning effective feature interactions is crucial for click-through rate (CTR) prediction tasks in recommender systems. In most existing deep learning models, feature interactions are either manually designed or simply enumerated. However, enumerating all feature interactions incurs large memory and computation costs. Even worse, useless interactions may introduce unnecessary noise and complicate the training process. In this work, we propose a two-stage algorithm called Automatic Feature Interaction Selection (AutoFIS). AutoFIS can automatically identify all the important feature interactions for factorization models, with a computational cost equivalent to training the target model to convergence. In the \emph{search stage}, instead of searching over a discrete set of candidate feature interactions, we relax the choices to be continuous by introducing architecture parameters. By applying a regularized optimizer to the architecture parameters, the model can automatically identify and remove redundant feature interactions during training. In the \emph{re-train stage}, we keep the architecture parameters, which now serve as an attention unit, to further boost performance. Offline experiments on three large-scale datasets (two public benchmarks, one private) demonstrate that the proposed AutoFIS can significantly improve various FM-based models. AutoFIS has been deployed onto the training platform of the Huawei App Store recommendation service, where a 10-day online A/B test demonstrated that AutoFIS improved the DeepFM model by 20.3\% and 20.1\% in terms of CTR and CVR respectively.
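To make the search-stage idea concrete, the sketch below gates each pairwise feature interaction of a factorization model with a learnable architecture parameter, and prunes the pairs whose parameter has been driven to zero. This is a minimal hypothetical illustration, not the paper's implementation: the embeddings, the `alpha` values, and the pruning threshold are all assumed for the example, and the actual regularized optimizer (which drives redundant `alpha` values to exactly zero during training) is omitted.

```python
import itertools
import random

random.seed(0)

def dot(u, v):
    """Inner product of two embedding vectors."""
    return sum(a * b for a, b in zip(u, v))

def gated_fm_interactions(embeddings, alpha):
    """Sum of pairwise inner products, each scaled by its
    architecture parameter alpha[(i, j)] (hypothetical sketch)."""
    total = 0.0
    for i, j in itertools.combinations(range(len(embeddings)), 2):
        total += alpha[(i, j)] * dot(embeddings[i], embeddings[j])
    return total

# Three feature fields with 4-dimensional embeddings (toy data).
emb = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]

# Assumed post-search values: the optimizer has zeroed out pair (0, 2).
alpha = {(0, 1): 1.0, (0, 2): 0.0, (1, 2): 0.3}

# Re-train stage keeps only the surviving pairs; their alphas are then
# re-learned jointly with the model as an attention unit.
kept = [p for p, a in alpha.items() if abs(a) > 1e-6]
```

Because the gate for pair `(0, 2)` is exactly zero, that interaction contributes nothing to the prediction and can be dropped from the re-trained model, which is how the search stage shrinks the interaction set without a discrete combinatorial search.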