Deep Metric Learning (DML), a prominent field of machine learning with extensive practical applications, focuses on learning visual similarities. It is known that inputs such as Adversarial Examples (AXs), which follow a distribution different from that of clean data, cause false predictions from DML systems. This paper proposes MDProp, a framework that simultaneously improves the performance of DML models on clean data and on inputs following multiple distributions. MDProp utilizes multi-distribution data generated through an AX generation process while leveraging disentangled learning via multiple batch normalization layers during the training of a DML model. MDProp is the first to generate feature-space multi-targeted AXs to perform targeted regularization on the training model's denser embedding space regions, resulting in improved embedding space densities that contribute to better generalization in the trained models. Through a comprehensive experimental analysis, we show that MDProp yields up to 2.95% higher clean-data Recall@1 scores and up to 2.12 times greater robustness against different input distributions compared to conventional methods.
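The disentangled learning mentioned above can be illustrated with a minimal sketch: a batch-normalization layer that keeps a separate set of running statistics for each input distribution (e.g., clean data vs. each AX type), so the statistics of one distribution do not pollute the others. This is a hypothetical NumPy illustration of the general multi-BN idea, not the paper's implementation; all names here are assumptions.

```python
import numpy as np

class MultiBatchNorm:
    """Illustrative sketch of distribution-aware batch normalization:
    shared affine parameters, but one set of running statistics per
    input distribution. Not the paper's code; names are hypothetical."""

    def __init__(self, num_features, num_distributions, eps=1e-5):
        self.eps = eps
        # affine parameters are shared across distributions
        self.gamma = np.ones(num_features)
        self.beta = np.zeros(num_features)
        # separate running statistics per distribution (0: clean, 1+: AX types)
        self.running_mean = np.zeros((num_distributions, num_features))
        self.running_var = np.ones((num_distributions, num_features))

    def __call__(self, x, dist_id, momentum=0.1):
        # normalize with the current batch statistics (training mode)
        mean, var = x.mean(axis=0), x.var(axis=0)
        # update only the statistics belonging to this distribution
        self.running_mean[dist_id] = (
            (1 - momentum) * self.running_mean[dist_id] + momentum * mean
        )
        self.running_var[dist_id] = (
            (1 - momentum) * self.running_var[dist_id] + momentum * var
        )
        return self.gamma * (x - mean) / np.sqrt(var + self.eps) + self.beta
```

During training, a clean batch would be routed through `dist_id=0` and an adversarial batch through its own `dist_id`, so each distribution accumulates its own statistics while the rest of the network's weights remain shared.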