In this paper, we introduce data multiplexing (DataMUX), a technique that enables deep neural networks to process multiple inputs simultaneously using a single compact representation. DataMUX demonstrates that neural networks are capable of generating accurate predictions over mixtures of inputs, resulting in increased throughput with minimal extra memory requirements. Our approach uses two key components: 1) a multiplexing layer that applies a fixed linear transformation to each input before combining them to create a mixed representation of the same size as a single input, which is then processed by the base network, and 2) a demultiplexing layer that converts the base network's output back into independent representations before producing predictions for each input. We show the viability of DataMUX for different architectures (Transformers, and to a lesser extent MLPs and CNNs) across six tasks spanning sentence classification, named entity recognition and image classification. For instance, DataMUX for Transformers can multiplex up to $20$x/$40$x inputs, achieving an $11$x/$18$x increase in throughput with minimal absolute performance drops of $<2\%$ and $<4\%$ respectively on MNLI, a natural language inference task. We also provide a theoretical construction for multiplexing in self-attention networks and analyze the effect of various design elements in DataMUX.
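To make the multiplex/demultiplex pipeline concrete, the following is a minimal PyTorch-style sketch of the wrapper described above, not the authors' released implementation; the module name `DataMuxWrapper`, the argument names, and the choice of frozen random linear maps for multiplexing and per-slot linear heads for demultiplexing are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DataMuxWrapper(nn.Module):
    """Sketch of data multiplexing around a base network.

    Assumes `base_net` maps (batch, d_model) -> (batch, d_model).
    All names here are illustrative, not the paper's official code.
    """

    def __init__(self, base_net, d_model, num_instances):
        super().__init__()
        self.base_net = base_net
        self.num_instances = num_instances
        # Multiplexing: one fixed (frozen) random linear map per input slot,
        # so different inputs land in distinguishable subspaces of the mixture.
        self.mux_transforms = nn.Parameter(
            torch.randn(num_instances, d_model, d_model) / d_model**0.5,
            requires_grad=False,
        )
        # Demultiplexing: one learned head per slot to recover an
        # independent representation for each original input.
        self.demux_heads = nn.ModuleList(
            nn.Linear(d_model, d_model) for _ in range(num_instances)
        )

    def forward(self, xs):
        # xs: list of num_instances tensors, each (batch, d_model).
        # Transform each input with its slot-specific map, then average
        # into a single mixed representation of the same size as one input.
        mixed = torch.stack(
            [x @ self.mux_transforms[i] for i, x in enumerate(xs)]
        ).mean(dim=0)
        # One forward pass of the base network serves all N inputs,
        # which is the source of the throughput gain.
        hidden = self.base_net(mixed)
        # Recover one output representation per original input.
        return [head(hidden) for head in self.demux_heads]
```

Keeping the multiplexing transforms fixed rather than learned is one simple way to keep the mixed inputs separable; the demultiplexing heads then carry the learned burden of disentangling each instance from the shared hidden state.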