Artist-drawn sketches only loosely conform to analytical models of perspective projection; the deviation of human-drawn perspective from analytical perspective models is persistent and well documented, but has yet to be algorithmically replicated. We encode this deviation between human and analytic perspectives as a continuous function in 3D space and develop a method to learn it. We seek deviation functions that (i) mimic artist deviation on our training data; (ii) generalize to other shapes; (iii) are consistent across different views of the same shape; and (iv) produce outputs that appear human-drawn. The natural data for learning this deviation consists of pairs of artist sketches of 3D shapes and best-matching analytical camera views of the same shapes. However, a core challenge in learning perspective deviation is the heterogeneity of human drawing choices, combined with relative data paucity (the datasets we rely on contain only a few dozen training pairs). We sidestep this challenge by learning perspective deviation from an individual pair consisting of an artist sketch of a 3D shape and the contours of the same shape rendered from a best-matching analytical camera view. We first match the contours of the depicted shape to artist strokes, then learn a spatially continuous local perspective deviation function that modifies the camera perspective so that it projects the contours onto their corresponding strokes. This function retains key geometric properties that artists strive to preserve when depicting 3D content, thus satisfying (i) and (iv) above. We generalize our method to alternative shapes and views (ii, iii) via a self-augmentation approach that algorithmically generates training data for nearby views and enforces spatial smoothness and consistency across all views. We compare our results to potential alternatives, demonstrating the superiority of the proposed approach.
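As a concrete illustration of what a spatially continuous local perspective deviation function could look like, the minimal sketch below models the deviation as a small MLP that maps a camera-space 3D contour point to a 2D image-space offset added to an analytic pinhole projection, fit to matched contour-stroke point pairs under a simple smoothness regularizer. This is a hedged sketch under assumed interfaces, not the method's actual parameterization or training procedure; all identifiers (DeviationField, analytic_projection, train_deviation), the PyTorch formulation, and the regularizer choice are illustrative assumptions.

```python
# Illustrative sketch (not the authors' implementation): a spatially
# continuous perspective-deviation field parameterized as a small MLP.
# It maps a camera-space 3D point to a 2D offset that is added to the
# analytic pinhole projection, and is fit so that projected contour
# points land on their matched artist-stroke points.
import torch
import torch.nn as nn

class DeviationField(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),          # 2D deviation in image space
        )

    def forward(self, pts_cam):            # pts_cam: (N, 3) camera-space points
        return self.net(pts_cam)

def analytic_projection(pts_cam, focal=1.0):
    """Standard pinhole projection of camera-space points onto the image plane."""
    return focal * pts_cam[:, :2] / pts_cam[:, 2:3]

def train_deviation(pts_cam, stroke_uv, steps=2000, lam_smooth=0.1):
    """Fit the deviation field so that (projection + deviation) matches the
    artist strokes, with a finite-difference smoothness term between
    consecutive contour samples (a stand-in for the paper's regularization)."""
    field = DeviationField()
    opt = torch.optim.Adam(field.parameters(), lr=1e-3)
    base_uv = analytic_projection(pts_cam)
    for _ in range(steps):
        dev = field(pts_cam)               # per-point 2D deviation
        pred_uv = base_uv + dev            # deviated projection
        loss_fit = ((pred_uv - stroke_uv) ** 2).mean()
        loss_smooth = ((dev[1:] - dev[:-1]) ** 2).mean()
        loss = loss_fit + lam_smooth * loss_smooth
        opt.zero_grad()
        loss.backward()
        opt.step()
    return field
```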