Convolutional networks are at the core of state-of-the-art computer vision systems for a wide variety of tasks. Since 2014, a large body of work has focused on designing better convolutional architectures, yielding substantial gains on various benchmarks. Although increased model size and computational cost tend to translate into immediate quality gains for most tasks, architectures now require additional information to improve performance further. We present empirical evidence that combining content-based image similarity with deep learning models provides a flow of information that makes clustered learning possible. We show that training sub-dataset clusters in parallel not only reduces computational cost but also improves benchmark accuracy by 5-11 percent.
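The clustered-learning setup described above can be sketched as follows: embed each image with a feature extractor, group the embeddings by content similarity, and treat each cluster as a sub-dataset to be trained on independently. This is a minimal illustration only, not the authors' implementation; the random feature matrix stands in for CNN embeddings, and the `kmeans` helper and the choice of k=4 clusters are assumptions for demonstration.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    # Plain k-means over feature vectors (stand-in for a
    # content-based image similarity grouping).
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # distance of every sample to every center
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

# Hypothetical data: 200 images embedded as 64-d feature vectors.
feats = np.random.default_rng(1).normal(size=(200, 64))
labels = kmeans(feats, k=4)

# Each cluster index set is a sub-dataset; in the scheme described
# in the abstract, one model per cluster would be trained in parallel.
clusters = [np.where(labels == j)[0] for j in range(4)]
```

Each entry of `clusters` can then be handed to a separate training worker, which is where the parallelism and the reduced per-model computation come from.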