Benefiting from their continuous representation ability, deep implicit functions can extract the iso-surface of a shape at arbitrary resolution. However, using a neural network with a large number of parameters as the implicit function slows the generation of high-resolution topology, because a large number of query points must be forwarded through the network. In this work, we propose TaylorImNet, inspired by the Taylor series, for implicit 3D shape representation. TaylorImNet exploits a set of discrete expansion points and corresponding Taylor series to model a continuous implicit shape field. Once the expansion points and corresponding coefficients are obtained, our model only needs to evaluate the Taylor series at each query point, and the number of expansion points is independent of the generation resolution. Based on this representation, our TaylorImNet achieves a significantly faster generation speed than other baselines. We evaluate our approach on reconstruction tasks with various types of input, and the experimental results demonstrate that our approach achieves slightly better performance than existing state-of-the-art baselines while improving inference speed by a large margin.
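To make the evaluation scheme concrete, the sketch below shows one plausible way a query point could be scored with a second-order Taylor expansion around a nearby expansion point. It is a minimal illustration only: the second-order form, the nearest-expansion-point assignment, and all names (taylor_eval, expansion_pts, values, grads, hessians) are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def taylor_eval(queries, expansion_pts, values, grads, hessians):
    """Approximate an implicit field at `queries` (Q, 3) using, for each query,
    the nearest of K expansion points and its Taylor coefficients.

    expansion_pts: (K, 3)    expansion point locations
    values:        (K,)      field value at each expansion point
    grads:         (K, 3)    first-order coefficients (gradients)
    hessians:      (K, 3, 3) second-order coefficients
    """
    # Assign each query to its nearest expansion point (illustrative choice).
    d2 = ((queries[:, None, :] - expansion_pts[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)                       # (Q,)

    # Offset from the chosen expansion point.
    delta = queries - expansion_pts[idx]          # (Q, 3)

    # Second-order Taylor series: f(x_k) + g_k . delta + 0.5 * delta^T H_k delta
    first = np.einsum('qd,qd->q', grads[idx], delta)
    second = 0.5 * np.einsum('qd,qde,qe->q', delta, hessians[idx], delta)
    return values[idx] + first + second
```

Note that evaluating this expression involves no network forward pass, which is the source of the claimed speedup: the network is queried only to produce the expansion points and coefficients, after which any number of query points can be scored with cheap tensor arithmetic.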