We show that it is possible to predict which deep network has generated a given logit vector with accuracy well above chance. We use a number of networks on a dataset, initialized with random weights or pretrained weights, as well as fine-tuned networks. A classifier is then trained on the logit vectors of the training set of this dataset to map each logit vector to the index of the network that generated it. The classifier is then evaluated on the test set of the dataset. Results are better with randomly initialized networks, but also generalize to pretrained as well as fine-tuned ones. Classification accuracy is higher with unnormalized logits than with normalized ones. We find that there is little transfer when applying a classifier to the same networks but with different sets of weights. In addition to helping better understand deep networks and the way they encode uncertainty, we anticipate our finding to be useful in some applications (e.g., tailoring an adversarial attack for a certain type of network). Code is available at https://github.com/aliborji/logits.
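The setup described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's actual pipeline: random linear maps with network-specific biases stand in for real deep networks, and scikit-learn's LogisticRegression serves as the classifier that maps an unnormalized logit vector to the index of the network that produced it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins for deep networks: each "network" is a random
# linear map plus a network-specific bias, producing 10-dim logit vectors.
n_networks, in_dim, n_classes = 3, 64, 10
networks = [
    (rng.normal(size=(in_dim, n_classes)) / np.sqrt(in_dim),  # weights
     rng.normal(size=n_classes))                              # bias
    for _ in range(n_networks)
]

# Pass every "image" through every network; the label for a logit vector
# is the index of the network that generated it.
images = rng.normal(size=(500, in_dim))
X = np.vstack([images @ W + b for W, b in networks])   # unnormalized logits
y = np.repeat(np.arange(n_networks), len(images))      # network index

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"network-identification accuracy: {clf.score(X_te, y_te):.2f}")
```

Even this toy version identifies the generating network far above the 1/3 chance level, because each network leaves a distinctive signature in its raw logits; the abstract's finding that unnormalized logits work best is consistent with such per-network offsets being erased by normalization.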