Freehand sketches exhibit unique sparsity and abstraction, necessitating learning pipelines distinct from those designed for images. For sketch learning methods, the central objective is to fully exploit the effective information embedded in sketches. However, little research has examined what constitutes effective sketch information, which in turn constrains the performance of existing approaches. To tackle this issue, we first propose the Multi-Level Sketch Representation Scheme to systematically identify such effective information. The scheme organizes sketch representation into three levels: sketch-level, stroke-level, and point-level. This design follows the granularity of analytical elements, from coarse (sketch-level) to fine (point-level), thereby ensuring more comprehensive coverage of sketch information. For each level, we conduct theoretical analyses and experimental evaluations to identify and validate the effective information. Building on these studies, we develop SDGraph, a deep learning architecture designed to exploit the identified effective information across all three levels. SDGraph comprises two complementary modules: a Sparse Graph that treats strokes as nodes for sketch-level and stroke-level representation learning, and a Dense Graph that treats points as nodes for sketch-level and point-level representation learning. Both modules employ graph convolution together with down-sampling and up-sampling operations, enabling them to function as both encoder and decoder. In addition, an information fusion module bridges the two graphs to further enhance feature extraction. SDGraph supports a wide range of sketch-related downstream tasks, achieving accuracy improvements of 1.15\% and 1.70\% over the state of the art in classification and retrieval, respectively, and a 36.58\% improvement in vector sketch generation quality.
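To make the dual-graph idea concrete, below is a minimal, illustrative PyTorch sketch of a classifier with a sparse branch over strokes and a dense branch over points, joined by a fusion layer. Everything here is an assumption for exposition: the names (GraphConv, SDGraphSketch), feature dimensions, dense-adjacency graph convolution, and random adjacency matrices are ours, not the authors' implementation, and the down-/up-sampling and encoder-decoder machinery of the actual SDGraph is omitted.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Simple graph convolution: aggregate neighbor features through a
    (row-normalized) adjacency matrix, then apply a linear projection."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (B, N, in_dim); adj: (B, N, N), rows assumed to sum to 1
        return torch.relu(self.proj(adj @ x))

class SDGraphSketch(nn.Module):
    """Hypothetical two-branch model: a sparse graph over strokes and a
    dense graph over points, fused before a classification head."""
    def __init__(self, point_dim=2, stroke_dim=2, hidden=64, n_classes=10):
        super().__init__()
        self.sparse_gc = GraphConv(stroke_dim, hidden)  # strokes as nodes
        self.dense_gc = GraphConv(point_dim, hidden)    # points as nodes
        self.fuse = nn.Linear(2 * hidden, hidden)       # information fusion
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, points, point_adj, strokes, stroke_adj):
        # Pool each branch over its nodes, then fuse the two views.
        dense = self.dense_gc(points, point_adj).mean(dim=1)      # point-level
        sparse = self.sparse_gc(strokes, stroke_adj).mean(dim=1)  # stroke-level
        fused = torch.relu(self.fuse(torch.cat([dense, sparse], dim=-1)))
        return self.head(fused)

# Usage with random data: one sketch of 128 points grouped into 8 strokes.
B, P, S = 1, 128, 8
points = torch.randn(B, P, 2)
strokes = torch.randn(B, S, 2)  # e.g., per-stroke centroids
point_adj = torch.softmax(torch.randn(B, P, P), dim=-1)   # stand-in adjacency
stroke_adj = torch.softmax(torch.randn(B, S, S), dim=-1)  # stand-in adjacency
logits = SDGraphSketch()(points, point_adj, strokes, stroke_adj)
print(logits.shape)  # torch.Size([1, 10])
```

The concatenation in the fusion layer mirrors the role of the information fusion module described above, which bridges the stroke-level and point-level views; the real architecture interleaves such fusion with sampling operations rather than applying it once at the end.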