Automatic image cropping is a challenging task with many practical downstream applications. The task is often divided into sub-problems: generating cropping candidates, finding the visually important regions, and determining aesthetics to select the most appealing candidate. Prior approaches model one or more of these sub-problems separately, and often combine them sequentially. We propose a novel convolutional neural network (CNN) based method that crops images directly, without explicitly modeling image aesthetics, evaluating multiple crop candidates, or detecting visually salient regions. Our model is trained on a large dataset of images cropped by experienced editors and can simultaneously predict bounding boxes for multiple fixed aspect ratios. We consider the aspect ratio of the cropped image to be a critical factor influencing aesthetics. Prior approaches for automatic image cropping did not enforce the aspect ratio of the outputs, likely due to a lack of datasets for this task. We therefore benchmark our method on public datasets for two related tasks: first, aesthetic image cropping without regard to aspect ratio, and second, thumbnail generation, which requires fixed aspect ratio outputs but where aesthetics are not crucial. We show that our strategy is competitive with or outperforms existing methods on both tasks. Furthermore, our one-stage model is easier to train and significantly faster at inference than existing two-stage or end-to-end methods. We present a qualitative evaluation study and find that our model generalizes to diverse images from unseen datasets and often retains the compositional properties of the original images after cropping. Our results demonstrate that explicitly modeling image aesthetics or visual attention regions is not necessarily required to build a competitive image cropping algorithm.
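As a rough illustration of the one-stage formulation described above, the PyTorch sketch below shows how a single shared CNN backbone could regress one bounding box per target aspect ratio in one forward pass, with no candidate generation, saliency detection, or aesthetics-scoring stage. The backbone choice, head size, and box parameterization here are assumptions for illustration only, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiAspectCropper(nn.Module):
    """Illustrative one-stage crop regressor (not the paper's exact model):
    a shared CNN backbone plus a single linear head that outputs one
    normalized (x, y, w, h) box per target aspect ratio."""
    def __init__(self, num_aspect_ratios: int = 3):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any ImageNet-style CNN would do
        # keep everything up to (and including) global average pooling
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.head = nn.Linear(512, 4 * num_aspect_ratios)
        self.num_aspect_ratios = num_aspect_ratios

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.features(images).flatten(1)          # [B, 512]
        boxes = torch.sigmoid(self.head(feats))           # coords normalized to [0, 1]
        return boxes.view(-1, self.num_aspect_ratios, 4)  # [B, K, 4]

# One forward pass predicts all K crops simultaneously.
model = MultiAspectCropper(num_aspect_ratios=3)
crops = model(torch.randn(1, 3, 224, 224))  # -> shape [1, 3, 4]
```

Under this formulation, training reduces to box regression against editor-provided crops (e.g., an L1 or IoU-style loss per aspect ratio), which is consistent with the abstract's claim that the one-stage model is simpler to train and faster at inference than multi-stage pipelines.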