Visual data such as images and videos are typically modeled as discretizations of inherently continuous, multidimensional signals. Existing continuous-signal models attempt to exploit this fact by modeling the underlying signals of visual (e.g., image) data directly. However, these models have not yet been able to achieve competitive performance on practical vision tasks such as large-scale image and video classification. Building on a recent line of work on deep state space models (SSMs), we propose S4ND, a new multidimensional SSM layer that extends the continuous-signal modeling ability of SSMs to multidimensional data including images and videos. We show that S4ND can model large-scale visual data in $1$D, $2$D, and $3$D as continuous multidimensional signals and demonstrates strong performance by simply swapping Conv2D and self-attention layers with S4ND layers in existing state-of-the-art models. On ImageNet-1k, S4ND exceeds the performance of a Vision Transformer baseline by $1.5\%$ when training with a $1$D sequence of patches, and matches ConvNeXt when modeling images in $2$D. For videos, S4ND improves on an inflated $3$D ConvNeXt in activity classification on HMDB-51 by $4\%$. S4ND implicitly learns global, continuous convolutional kernels that are resolution invariant by construction, providing an inductive bias that enables generalization across multiple resolutions. By developing a simple bandlimiting modification to S4 to overcome aliasing, S4ND achieves strong zero-shot (unseen at training time) resolution performance, outperforming a baseline Conv2D by $40\%$ on CIFAR-10 when trained on $8 \times 8$ and tested on $32 \times 32$ images. When trained with progressive resizing, S4ND comes within $\sim 1\%$ of a high-resolution model while training $22\%$ faster.