With the advent of the big data era, the amount of information available on the Internet and on mobile platforms has exploded. Recommendation systems, in particular, are widely used to help consumers who struggle to select the best products among such a large amount of information. However, recommendation systems are vulnerable to malicious user biases, such as fake reviews posted to promote or demote specific products, as well as to attacks that steal personal information. Such biases and attacks distort the underlying data, compromising the fairness of the recommendation model and infringing on the privacy of users and systems. Recently, deep-learning collaborative filtering recommendation systems have been shown to be even more vulnerable to this kind of bias. In this position paper, we examine the effects of bias that give rise to various ethical and social issues, and discuss the need to design recommendation systems that are robust with respect to fairness and stability.