We propose a robust variant of boosting forest as an adversarial defense method and apply it to enhance the robustness of deep neural networks. We retain the deep network's architecture, weights, and middle-layer features, then install a gradient boosting forest that selects features from each layer of the deep network to predict the target. For training each decision tree, we propose a novel conservative-greedy trade-off: split selection favors fewer mispredictions over pure gain functions, making it suboptimal but conservative, while we greedily increase tree depth to recover accuracy by splitting on more features. We also propose a new task on 3D face models, whose robustness has not been carefully studied despite the serious security and privacy concerns surrounding face analytics. Using a simple attack with imperceptible perturbations, we degrade a pure convolutional neural network (CNN) face shape estimator until it outputs only the average face shape. On face landmark datasets, our conservative-greedy boosting forest (CGBF) shows a large improvement over pure deep learning methods under adversarial attacks.
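To illustrate the conservative side of the trade-off described above, the following is a minimal sketch (not the paper's implementation) of a split criterion that chooses the feature and threshold minimizing the number of mispredictions, rather than maximizing an information-gain function; the function name and toy data are illustrative assumptions.

```python
import numpy as np

def conservative_split(X, y):
    """Pick the (feature, threshold) split minimizing mispredictions.

    A conventional greedy tree maximizes a gain function (e.g. entropy or
    Gini reduction); this conservative variant instead counts how many
    samples the majority-vote leaves would mislabel, and picks the split
    with the fewest such errors.
    """
    best = None  # (mispredictions, feature index, threshold)
    n, d = X.shape
    for j in range(d):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            err = 0
            for mask in (left, ~left):
                if mask.any():
                    # errors = samples not matching the majority label
                    _, counts = np.unique(y[mask], return_counts=True)
                    err += mask.sum() - counts.max()
            if best is None or err < best[0]:
                best = (err, j, t)
    return best

# Toy data: feature 0 separates the two classes perfectly.
X = np.array([[0.1, 5.0], [0.2, 1.0], [0.9, 4.0], [0.8, 2.0]])
y = np.array([0, 0, 1, 1])
err, j, t = conservative_split(X, y)
# err == 0, j == 0, t == 0.2: zero mispredictions on feature 0
```

In a full CGBF-style tree, this criterion would be applied recursively while the depth is grown more aggressively than usual, so that splitting on additional features compensates for the deliberately suboptimal per-split choices.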