We propose a stealthy clean-label video backdoor attack against Deep Learning (DL)-based models designed to detect a particular class of spoofing attacks, namely video rebroadcast attacks. The injected backdoor does not affect spoofing detection under normal conditions, but induces a misclassification when a specific triggering signal is present. The proposed backdoor relies on a temporal trigger that alters the average chrominance of the video sequence. The triggering signal is designed by taking into account the peculiarities of the Human Visual System (HVS) to reduce its visibility, thus increasing the stealthiness of the backdoor. To force the network to rely on the presence of the trigger in the challenging clean-label scenario, we select the samples to be poisoned according to a so-called Outlier Poisoning Strategy (OPS). With OPS, the triggering signal is inserted into the training samples that the network finds the most difficult to classify. The effectiveness and generality of the proposed backdoor attack are validated experimentally on different datasets and rebroadcast detection architectures.
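To make the two mechanisms concrete, the sketch below illustrates, under loose assumptions, how a temporal chrominance trigger and an OPS-style sample selection could be implemented. The sinusoidal waveform, the `amplitude` and `period` parameters, and the function names are hypothetical choices for illustration; the paper's actual trigger is shaped against HVS visibility thresholds and may differ in form.

```python
import numpy as np

def add_chrominance_trigger(frames, amplitude=3.0, period=30):
    """Superimpose a slow temporal offset on the Cb/Cr channels of a video.

    frames: uint8 array of shape (T, H, W, 3) in YCbCr color space.
    amplitude and period are assumed trigger parameters; a real attack
    would tune them so the modulation stays below HVS visibility.
    """
    poisoned = frames.astype(np.float32)
    t = np.arange(frames.shape[0])
    # Temporal modulation of the average chrominance, frame by frame.
    offset = amplitude * np.sin(2 * np.pi * t / period)
    poisoned[..., 1] += offset[:, None, None]  # shift Cb channel
    poisoned[..., 2] += offset[:, None, None]  # shift Cr channel
    return np.clip(poisoned, 0, 255).astype(np.uint8)

def select_outliers(losses, poison_budget):
    """OPS-style selection: given per-sample losses of the target class
    under a (surrogate) classifier, return the indices of the samples
    the network finds hardest to classify."""
    hardest = np.argsort(losses)[::-1]  # sort by descending loss
    return hardest[:poison_budget]
```

In a clean-label setting, the trigger would be applied only to the hardest target-class samples returned by `select_outliers`, without changing their labels, so the network is pushed to associate the chrominance modulation with the target class.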