Convolutional networks are at the core of state-of-the-art computer vision systems for a wide variety of tasks. Since 2014, a substantial body of work has focused on designing better convolutional architectures, yielding considerable gains across different benchmarks. Although increased model size and computational cost tend to translate into immediate quality gains for most tasks, architectures now require additional information to improve performance further. I show evidence that by combining content-based image similarity with deep learning models, we can provide a flow of information that makes clustered learning possible. The paper shows how training on sub-dataset clusters not only reduces the cost of computation but also increases the speed of evaluating and tuning a model on a given dataset.
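To make the idea of clustered learning concrete, the following is a minimal sketch, not the paper's exact pipeline: images are embedded with a pretrained CNN to capture content-based similarity, grouped with k-means, and each resulting sub-dataset cluster can then be trained and tuned independently. The ResNet-18 backbone, the k-means step, and names such as `num_clusters` are illustrative assumptions, not details taken from the paper.

```python
# Sketch of clustered learning via content-based image similarity (assumed setup):
# 1. embed each image with a pretrained CNN,
# 2. cluster the embeddings with k-means,
# 3. hand each cluster's indices to a separate, cheaper training/tuning run.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.cluster import KMeans

def embed_images(images, device="cpu"):
    """Return L2-normalised feature vectors from a pretrained CNN backbone."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()  # drop the classification head, keep features
    backbone.eval().to(device)
    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    with torch.no_grad():
        feats = torch.stack([
            backbone(preprocess(img).unsqueeze(0).to(device)).squeeze(0)
            for img in images  # images are PIL.Image instances
        ])
    return torch.nn.functional.normalize(feats, dim=1).cpu().numpy()

def cluster_dataset(images, num_clusters=4):
    """Assign every image to a content-similarity cluster; return index lists."""
    features = embed_images(images)
    labels = KMeans(n_clusters=num_clusters, n_init="auto",
                    random_state=0).fit_predict(features)
    # Each list of indices defines one sub-dataset cluster that can be
    # trained and evaluated on its own, reducing per-run computation.
    return [np.where(labels == k)[0] for k in range(num_clusters)]
```

In this sketch, the cost reduction comes from the fact that each cluster is smaller and more homogeneous than the full dataset, so a model can be evaluated or tuned on one cluster at a time rather than over all images at once.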