Torchattacks is a PyTorch library that provides adversarial attacks for generating adversarial examples and verifying the robustness of deep learning models. The code can be found at https://github.com/Harry24k/adversarial-attacks-pytorch.
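A minimal usage sketch, assuming the torchattacks convention of wrapping a model in an attack object (here PGD with illustrative hyperparameters) and calling it on image/label batches; the toy linear classifier and random tensors below are placeholders, not part of the library:

```python
import torch
import torchattacks

# Placeholder classifier: any PyTorch model that maps images to class logits works.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 32 * 32, 10),
).eval()

# Wrap the model in an attack object, e.g. PGD with an L-inf budget of 8/255.
atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=10)

# Placeholder data: a batch of images scaled to [0, 1] and their ground-truth labels.
images = torch.rand(4, 3, 32, 32)
labels = torch.randint(0, 10, (4,))

# Calling the attack returns perturbed images intended to fool the model.
adv_images = atk(images, labels)
```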