Privacy assistants help users manage their privacy online. Their tasks vary from detecting privacy violations to recommending sharing actions for content that the user intends to share. Recent work on these tasks is promising and shows that privacy assistants can tackle them successfully. However, for such privacy assistants to be adopted by users, it is important that they can explain their decisions. Accordingly, this paper develops a methodology for creating privacy explanations. The methodology is based on identifying important topics in a domain of interest, providing explanation schemes for decisions, and generating the explanations automatically. We apply the proposed methodology to a real-world privacy dataset, which contains images labeled as private or public, to explain the labels. We evaluate our approach with a user study that shows which factors influence whether users find explanations useful.