Min-Nets are inspired by end-stopped cortical cells, with units that output the minimum of the responses of two learned filters. We insert such Min-units into state-of-the-art deep networks, such as the popular ResNet and DenseNet, and show that the resulting Min-Nets perform better on the Cifar-10 benchmark. Moreover, we show that Min-Nets are more robust against JPEG compression artifacts. We argue that the minimum operation is the simplest way of implementing an AND operation on pairs of filters and that such AND operations introduce a bias that is appropriate given the statistics of natural images.
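The core idea — taking the elementwise minimum of two filter responses so that the unit fires only where *both* filters respond — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the helper `conv2d_valid` and the function names are hypothetical, and a real Min-Net would use learned convolutional filters inside a deep network.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D 'valid' correlation (hypothetical helper for illustration)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def min_unit(image, filter_a, filter_b):
    """Min-unit: elementwise minimum of two filter responses.

    Acts as a soft AND on the filter pair: the output is large only
    at positions where BOTH filters respond strongly.
    """
    return np.minimum(conv2d_valid(image, filter_a),
                      conv2d_valid(image, filter_b))
```

In a deep network, the same operation would be applied channel-wise to pairs of feature maps produced by learned convolutions, replacing (or following) the usual single-filter activation.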