Graphic layout designs play an essential role in visual communication. Yet handcrafting layout designs is skill-demanding, time-consuming, and non-scalable to batch production. Although generative models have emerged to make design automation no longer utopian, it remains non-trivial to customize designs that comply with designers' multimodal desires, i.e., constrained by background images and driven by foreground contents. In this study, we propose \textit{LayoutDETR}, which inherits the high quality and realism of generative modeling while reformulating content-aware requirements as a detection problem: we learn to detect, in a background image, the reasonable locations, scales, and spatial relations for multimodal elements in a layout. Experiments validate that our solution yields new state-of-the-art performance for layout generation on public benchmarks and on our newly curated ads banner dataset. For practical usage, we build our solution into a graphical system that facilitates user studies. We demonstrate that our designs attract significantly more subjective preference than baselines. Our code, models, dataset, graphical system, and demos are available at https://github.com/salesforce/LayoutDETR.