Inferring programs that generate 2D and 3D shapes is important for reverse engineering, editing, and more. Training such inference models is challenging due to the lack of paired (shape, program) data in most domains. A popular approach is to pre-train a model on synthetic data and then fine-tune it on real shapes using slow, unstable reinforcement learning. In this paper, we argue that self-training is a viable alternative for fine-tuning such models. Self-training is a semi-supervised learning paradigm in which a model assigns pseudo-labels to unlabeled data and then retrains on the (data, pseudo-label) pairs as the new ground truth. We show that for constructive solid geometry and assembly-based modeling, self-training outperforms state-of-the-art reinforcement learning approaches. Additionally, shape program inference has a unique property that circumvents a potential downside of self-training (incorrect pseudo-label assignment): inferred programs are executable. For a given shape $\mathbf{x}^*$ from our distribution of interest and its predicted program $\mathbf{z}$, one can execute $\mathbf{z}$ to obtain a shape $\mathbf{x}$ and train on $(\mathbf{z}, \mathbf{x})$ pairs rather than $(\mathbf{z}, \mathbf{x}^*)$ pairs. We term this procedure latent execution self-training (LEST). We demonstrate that self-training infers shape programs with higher shape reconstruction accuracy and converges significantly faster than reinforcement learning approaches, and that in some domains LEST can further improve this performance.
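To make the fine-tuning loop concrete, below is a minimal Python sketch of one round of plain self-training versus LEST, under stated assumptions: `model.infer`, `execute`, and `train_step` are hypothetical placeholders (a program-inference network, the shape-program executor, and one supervised update), not the paper's implementation.

```python
from typing import Callable, Iterable, List, Tuple

def self_training_round(
    model,                       # hypothetical: model.infer(shape) -> program
    shapes: Iterable,            # unlabeled shapes x* from the target domain
    execute: Callable,           # hypothetical executor: program z -> shape x
    train_step: Callable,        # hypothetical: (model, shape, program) -> None
    use_lest: bool = True,
) -> None:
    """One fine-tuning round of self-training, with or without LEST."""
    # 1) Pseudo-labeling: infer a candidate program for each unlabeled shape.
    pairs: List[Tuple[object, object]] = []
    for x_star in shapes:
        z = model.infer(x_star)  # pseudo-label; may not reconstruct x* exactly
        if use_lest:
            # LEST: execute z and pair it with the shape it actually produces,
            # so the (shape, program) supervision is correct by construction.
            x = execute(z)
            pairs.append((x, z))
        else:
            # Plain self-training: treat z as the label for the real shape x*.
            pairs.append((x_star, z))

    # 2) Retrain on the (shape, program) pairs as the new ground truth.
    for x, z in pairs:
        train_step(model, x, z)
```

The design difference is confined to the pairing step: LEST trades some distributional fidelity (training shapes come from the executor rather than the target domain) for labels that are guaranteed consistent with their programs.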