The input data of a neural network can be reconstructed from knowledge of its gradients, as demonstrated by \cite{zhu2019deep}. By imposing prior knowledge and using a uniform initialization, we demonstrate faster and more accurate image reconstruction. Exploring the theoretical limits of reconstruction, we show that a single input can be reconstructed, regardless of network depth, using a fully-connected neural network with one hidden node. We then generalize this result to a gradient averaged over mini-batches of size $B$; in this case, the full mini-batch can be reconstructed if the number of hidden units exceeds $B$, with an orthogonality regularizer improving the precision. For a convolutional neural network, the number of filters required in the first convolutional layer is determined by multiple factors (e.g., padding, kernel size and stride). We therefore require the number of filters to satisfy $h \geq (\frac{d}{d'})^2 C$, where $d$ is the input width, $d'$ is the output width after the convolution, and $C$ is the number of input channels. Finally, we validate our theoretical analysis and improvements using biomedical (fMRI) and benchmark data (MNIST, Kuzushiji-MNIST, CIFAR100, ImageNet and face images).
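The single-input claim above has a simple closed-form intuition for one fully-connected layer: with $y = Wx + b$, the weight gradient factors as $\partial L/\partial W = (\partial L/\partial y)\,x^{\top}$, so any row of $\partial L/\partial W$ divided by the matching entry of $\partial L/\partial b$ recovers $x$ exactly. The sketch below is a minimal numerical illustration of this factorization, assuming a squared-error loss; it is not the paper's full reconstruction procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

d, h = 6, 4                      # input dimension, hidden units
x = rng.normal(size=d)           # the "private" input we try to recover
W = rng.normal(size=(h, d))      # fully-connected weights
b = rng.normal(size=h)
t = rng.normal(size=h)           # target for the assumed squared-error loss

# Forward pass and analytic gradients of L = 0.5 * ||W x + b - t||^2
y = W @ x + b
g_y = y - t                      # dL/dy
grad_W = np.outer(g_y, x)        # dL/dW = (dL/dy) x^T
grad_b = g_y                     # dL/db = dL/dy

# Reconstruction: any row i with grad_b[i] != 0 gives x exactly,
# since grad_W[i, :] = grad_b[i] * x.
i = int(np.argmax(np.abs(grad_b)))
x_hat = grad_W[i] / grad_b[i]

print(np.allclose(x, x_hat))     # True
```

The same rank-one structure is what breaks down once gradients are averaged over a mini-batch, which is why the batch case needs $h > B$ and the orthogonality regularizer.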
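As an illustrative instance of the filter-count condition (the numbers here are hypothetical, chosen only for concreteness): a CIFAR-style RGB input with width $d = 32$ passed through a stride-2 convolution yielding output width $d' = 16$, with $C = 3$ channels, requires

```latex
h \;\geq\; \left(\frac{d}{d'}\right)^{2} C
  \;=\; \left(\frac{32}{16}\right)^{2} \cdot 3
  \;=\; 12
```

filters in the first convolutional layer.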