The rise of deep neural networks offers new opportunities for optimizing recommender systems. However, optimizing recommender systems with deep neural networks requires careful architecture design. We propose NASRec, a paradigm that trains a single supernet and efficiently produces abundant models/sub-architectures via weight sharing. To overcome the data multi-modality and architecture heterogeneity challenges in the recommendation domain, NASRec establishes a large supernet (i.e., search space) to search over full architectures. The supernet incorporates a versatile set of operators and dense connectivity to minimize the human effort needed to find priors. The scale and heterogeneity of NASRec impose several challenges, such as training inefficiency, operator imbalance, and degraded rank correlation. We tackle these challenges by proposing single-operator any-connection sampling, operator-balancing interaction modules, and post-training fine-tuning. Our crafted models, NASRecNet, show promising results on three Click-Through Rate (CTR) prediction benchmarks: NASRec achieves state-of-the-art performance, outperforming both manually designed models and existing NAS methods. Our work is publicly available at https://github.com/facebookresearch/NasRec.
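To make the sampling scheme concrete, below is a minimal PyTorch sketch of weight-shared supernet training with single-operator any-connection sampling: each batch samples exactly one candidate operator per block and a random (non-empty) subset of predecessor connections. All names here (`ChoiceBlock`, `TinySupernet`, `sample_subnet`), the candidate operator set, and the dimensions are illustrative assumptions, not the paper's actual implementation; see the linked repository for that.

```python
import random
import torch
import torch.nn as nn

class ChoiceBlock(nn.Module):
    """One supernet block holding several weight-shared candidate operators.
    The operator set is a placeholder, not NASRec's actual operator menu."""
    def __init__(self, dim):
        super().__init__()
        self.candidates = nn.ModuleDict({
            "linear": nn.Linear(dim, dim),
            "gelu_mlp": nn.Sequential(nn.Linear(dim, dim), nn.GELU()),
            "identity": nn.Identity(),
        })

    def forward(self, x, op_name):
        # Single-operator sampling: only the chosen operator runs
        # (and receives gradients) for this batch.
        return self.candidates[op_name](x)

class TinySupernet(nn.Module):
    """Dense connectivity: block i may consume any subset of earlier outputs."""
    def __init__(self, dim=16, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(ChoiceBlock(dim) for _ in range(num_blocks))

    def forward(self, x, arch):
        feats = [x]
        for block, (op_name, mask) in zip(self.blocks, arch):
            # Any-connection: aggregate the sampled subset of predecessors.
            inp = sum(f for f, keep in zip(feats, mask) if keep)
            feats.append(block(inp, op_name))
        return feats[-1]

def sample_subnet(num_blocks, op_names=("linear", "gelu_mlp", "identity")):
    """Sample one operator per block and a non-empty predecessor mask."""
    arch = []
    for i in range(num_blocks):
        op = random.choice(op_names)
        mask = [random.random() < 0.5 for _ in range(i + 1)]
        if not any(mask):                      # guarantee at least one input
            mask[random.randrange(i + 1)] = True
        arch.append((op, mask))
    return arch

# One weight-sharing training step: a fresh sub-architecture per batch.
net = TinySupernet(dim=16, num_blocks=4)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x, y = torch.randn(8, 16), torch.randn(8, 16)
arch = sample_subnet(num_blocks=4)
loss = nn.functional.mse_loss(net(x, arch), y)
opt.zero_grad(); loss.backward(); opt.step()
```

Because only one operator per block is active in each step, the per-step cost matches training a single sub-architecture, while all candidates gradually share training signal across sampled sub-networks.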