Recent work suggests that the representations learned by adversarially robust networks are more perceptually aligned with human vision than those of non-robust networks, as probed through image manipulations. Despite appearing closer to human visual perception, it is unclear whether the constraints in robust DNN representations match the biological constraints found in human vision. Human vision seems to rely on texture-based/summary statistic representations in the periphery, which have been shown to explain phenomena such as crowding and performance on visual search tasks. To understand how adversarially robust optimizations/representations compare to human vision, we performed a psychophysics experiment using a set of metameric discrimination tasks in which we evaluated how well human observers could distinguish images synthesized to match adversarially robust representations from those matched to non-robust representations and to a texture synthesis model of peripheral vision (Texforms). We found that the discriminability of robust-representation and texture-model images decreased to near-chance performance as stimuli were presented farther in the periphery. Moreover, performance on robust and texture-model images showed similar trends within participants, while performance on non-robust representations changed minimally across the visual field. Together, these results suggest that (1) adversarially robust representations capture peripheral computation better than non-robust representations and (2) robust representations capture peripheral computation similarly to current state-of-the-art texture-based models of peripheral vision. More broadly, our findings support the idea that localized texture summary statistic representations may drive human invariance to adversarial perturbations and that incorporating such representations in DNNs could give rise to useful properties like adversarial robustness.
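As a rough illustration of the stimulus-generation step described above (synthesizing an image whose model representation matches that of a reference image), the following is a minimal sketch using PyTorch gradient descent on pixels. It assumes a standard ResNet backbone as a stand-in for the robust/non-robust networks; the layer choice, optimizer settings, and helper names (`get_features`, `synthesize_metamer`) are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch of feature-matched ("metamer") stimulus synthesis.
# Assumes a PyTorch classifier whose intermediate activations define the
# target representation; swap in an adversarially trained checkpoint to
# approximate the "robust representation" condition.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None).eval()   # stand-in backbone (hypothetical choice)
for p in model.parameters():
    p.requires_grad_(False)

def get_features(x):
    """Return an intermediate activation to match (here: output of layer3)."""
    x = model.conv1(x); x = model.bn1(x); x = model.relu(x); x = model.maxpool(x)
    x = model.layer1(x); x = model.layer2(x); x = model.layer3(x)
    return x

def synthesize_metamer(reference, steps=500, lr=0.05):
    """Optimize a noise seed so its features match those of `reference`."""
    target = get_features(reference).detach()
    seed = torch.rand_like(reference, requires_grad=True)
    opt = torch.optim.Adam([seed], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(get_features(seed), target)
        loss.backward()
        opt.step()
        with torch.no_grad():
            seed.clamp_(0.0, 1.0)              # keep pixels in a valid range
    return seed.detach()

reference = torch.rand(1, 3, 224, 224)          # placeholder image tensor
metamer = synthesize_metamer(reference)
```

Under this kind of procedure, the resulting images are model-matched to the reference at the chosen layer, so any residual discriminability for human observers reflects information that the model's representation discards but human vision retains.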