Work on fast weight programmers has demonstrated the effectiveness of key/value outer product-based learning rules for sequentially generating a weight matrix (WM) of a neural net (NN) by another NN or itself. However, the weight generation steps are typically not visually interpretable by humans, because the contents stored in the WM of an NN are not. Here we apply the same principle to generate natural images. The resulting fast weight painters (FPAs) learn to execute sequences of delta learning rules to sequentially generate images as sums of outer products of self-invented keys and values, one rank at a time, as if each image were the WM of an NN. We train our FPAs in the generative adversarial networks framework, and evaluate them on various image datasets. We show how these generic learning rules can generate images of respectable visual quality without any explicit inductive bias for images. While the performance largely lags behind that of specialised state-of-the-art image generators, our approach allows for visualising how synaptic learning rules iteratively produce complex connection patterns, yielding human-interpretable meaningful images. Finally, we also show that an additional convolutional U-Net (now popular in diffusion models) at the output of an FPA can learn one-step "denoising" of FPA-generated images to enhance their quality. Our code is public.
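The core mechanism described above can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration (not the authors' implementation): an image is treated as a weight matrix built up one rank at a time via delta-rule updates, with random keys and values standing in for the learned, self-invented ones an FPA would emit at each step.

```python
import numpy as np

# Hypothetical sketch of the delta-rule "painting" loop.
# keys/values are random here; in an FPA they are generated by the network.
rng = np.random.default_rng(0)
H, W_dim, steps = 32, 32, 8   # image height, width, and number of painting steps
lr = 0.5                      # learning-rate (assumed fixed here; an FPA can emit it)

img = np.zeros((H, W_dim))    # the image, treated as a weight matrix
for _ in range(steps):
    k = rng.standard_normal(W_dim)      # "self-invented" key (random stand-in)
    k /= np.linalg.norm(k)              # unit-norm key
    v = rng.standard_normal(H)          # target value to store under this key
    v_old = img @ k                     # value currently retrieved by the key
    img += lr * np.outer(v - v_old, k)  # delta rule: rank-one correction

# Each update adds a rank-one term, so after `steps` updates
# the image has rank at most `steps`.
print(np.linalg.matrix_rank(img))
```

Because every update is an outer product, intermediate snapshots of `img` directly visualise how the learning rule accumulates structure rank by rank, which is the interpretability property the abstract highlights.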