Phishing and disinformation are popular social engineering attacks in which attackers embed influence cues in text to make it more appealing to users. We introduce Lumen, a learning-based framework that exposes influence cues in text: (i) persuasion, (ii) framing, (iii) emotion, (iv) objectivity/subjectivity, (v) guilt/blame, and (vi) use of emphasis. Lumen was trained on a newly developed dataset of 3K texts comprising disinformation, phishing, hyperpartisan news, and mainstream news. An evaluation of Lumen against other learning models showed that Lumen and an LSTM achieved the best F1-micro scores, with Lumen offering better interpretability. Our results highlight the promise of ML for exposing influence cues in text, toward eventual application in automatic labeling tools that improve the accuracy of human-based detection and reduce the likelihood of users falling for deceptive online content.
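To make the evaluation setup concrete, the following is a minimal sketch of a six-cue multi-label text classifier scored with F1-micro, the metric used in the comparison above. It is not Lumen's actual architecture: the TF-IDF features and one-vs-rest logistic regression are stand-in assumptions, and every text and label below is an invented toy example.

```python
# Minimal sketch of a six-cue multi-label classifier scored with F1-micro.
# NOT Lumen's actual model: TF-IDF + one-vs-rest logistic regression is a
# stand-in baseline, and every text/label below is an invented toy example.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.multiclass import OneVsRestClassifier

# The six influence cues from the abstract, one binary label per cue.
CUES = ["persuasion", "framing", "emotion", "subjectivity",
        "guilt_blame", "emphasis"]

train_texts = [
    "Act NOW or your account will be SUSPENDED!!!",
    "It is all your fault that this happened.",
    "The committee released its quarterly report on Tuesday.",
    "Experts agree this is the best deal you will ever see.",
    "Only a fool would believe the other side's story.",
    "You MUST verify your password immediately to avoid losing access.",
]
# One row per text, one binary column per cue (column order matches CUES).
train_labels = np.array([
    [1, 1, 1, 0, 0, 1],
    [0, 0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0, 0],
    [0, 0, 1, 1, 1, 0],
    [1, 1, 0, 0, 0, 1],
])

# Bag-of-words features plus one binary classifier per cue.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_train, train_labels)

# Score held-out text with F1-micro, which pools true/false positives
# and negatives across all six labels before computing F1.
test_texts = ["Verify your account NOW or lose everything you own."]
test_labels = np.array([[1, 1, 1, 0, 0, 1]])  # hand-assigned toy labels
pred = clf.predict(vectorizer.transform(test_texts))
print("F1-micro:", f1_score(test_labels, pred, average="micro"))
```

F1-micro is a natural choice here because a single text can carry several cues at once, and micro-averaging weights every cue decision equally rather than averaging per-label scores.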