Many patterns in nature exhibit self-similarity: they can be compactly described via self-referential transformations. Such patterns commonly appear in natural and artificial objects, including molecules, shorelines, galaxies, and even images. In this work, we investigate the role of learning in the automated discovery of self-similarity and in its utilization for downstream tasks. To this end, we design a novel class of implicit operators, Neural Collages, which (1) represent data as the parameters of a self-referential, structured transformation, and (2) employ hypernetworks to amortize the cost of finding these parameters to a single forward pass. We investigate how to leverage the representations produced by Neural Collages in various tasks, including data compression and generation. Neural Collage image compressors are orders of magnitude faster than other self-similarity-based algorithms during encoding and offer compression rates competitive with implicit methods. Finally, we showcase applications of Neural Collages to fractal art and as deep generative models.
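The core idea behind collage-style representations, as described above, is that the data itself is never stored: only the parameters of a contractive, self-referential map are kept, and the signal is recovered as that map's fixed point by iterating from an arbitrary initialization. The following is a minimal illustrative sketch of this decoding principle, not the paper's method: it uses a toy 1D signal, hypothetical `(scale, offset)` parameters per range block, and a crude downsampling of the whole signal as the shared domain block.

```python
import numpy as np

def decode_collage(params, size=16, n_iters=50):
    """Decode a signal as the fixed point of a toy collage transformation.

    Each range block (length size // len(params)) is produced from a
    downsampled copy of the current signal (the 'domain' block) via
    scale * domain + offset. `params` is a list of (scale, offset) pairs,
    one per range block. With |scale| < 1 the map is contractive, so
    iteration converges to the same fixed point from any starting signal.
    """
    x = np.zeros(size)                      # arbitrary initialization
    block = size // len(params)
    for _ in range(n_iters):
        # downsample the current signal to a domain block of length `block`
        domain = x.reshape(block, -1).mean(axis=1)
        # apply each block's affine map and stitch the result together
        x = np.concatenate([s * domain + o for (s, o) in params])
    return x

# Hypothetical parameters; these 8 numbers fully describe the 16-sample signal.
params = [(0.5, 0.0), (0.5, 1.0), (-0.3, 2.0), (0.2, -1.0)]
signal = decode_collage(params)
```

In this view, "compression" amounts to searching for the `params` whose fixed point best matches a target signal; the hypernetwork in Neural Collages amortizes that search into a single forward pass.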