Storing tabular data in a way that balances storage and query efficiency is a long-standing research question in the database community. While the literature offers several lossless compression techniques, in this work we argue and show that a novel Deep Learned Data Mapping (or DeepMapping) abstraction, which relies on the impressive memorization capabilities of deep neural networks, can provide lower storage cost, lower latency, and a smaller run-time memory footprint, all at the same time. Our proposed DeepMapping abstraction transforms a dataset into multiple key-value mappings and constructs a multi-tasking neural network model that outputs the corresponding values for a given input key. To handle memorization errors, DeepMapping couples the learned neural network with a lightweight auxiliary data structure capable of correcting them. The auxiliary structure further enables DeepMapping to efficiently handle insertions, deletions, and updates without retraining the mapping. Since the shape of the network has a significant impact on the overall size of the DeepMapping structure, we further propose a multi-task hybrid architecture search strategy to identify DeepMapping architectures that strike a desirable balance among memorization capacity, size, and efficiency. Extensive experiments with synthetic and benchmark datasets, including TPC-H and TPC-DS, demonstrate that the proposed DeepMapping approach can significantly reduce the latency of key-based queries while simultaneously reducing both offline and run-time storage requirements compared with several state-of-the-art competitors.
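To make the described mechanism concrete, the following is a minimal sketch, assuming PyTorch and a toy two-column categorical table. The network class, hyperparameters, and the dictionary-based auxiliary structure are illustrative assumptions chosen for clarity, not the paper's actual implementation: a shared trunk with one classification head per value column memorizes the key-value mappings, and a small table of misremembered keys restores exactness.

```python
# Illustrative sketch of the DeepMapping idea (assumptions, not the
# authors' code): a multi-task network memorizes key -> value mappings,
# and a lightweight auxiliary table corrects memorization errors.
import torch
import torch.nn as nn

# Toy table: key -> (value_a, value_b); each value column has a small
# categorical domain, so memorization becomes per-column classification.
keys = torch.arange(100)
values_a = keys % 7          # first value column
values_b = (keys * 3) % 5    # second value column

class SharedMultiTaskNet(nn.Module):
    """Shared trunk with one classification head per value column."""
    def __init__(self, num_keys, dims=(7, 5), hidden=64):
        super().__init__()
        self.embed = nn.Embedding(num_keys, hidden)  # encodes the key
        self.trunk = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, d) for d in dims])

    def forward(self, k):
        h = self.trunk(self.embed(k))
        return [head(h) for head in self.heads]

model = SharedMultiTaskNet(num_keys=100)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(300):  # overfit on purpose: the model *is* the storage
    opt.zero_grad()
    out_a, out_b = model(keys)
    loss = loss_fn(out_a, values_a) + loss_fn(out_b, values_b)
    loss.backward()
    opt.step()

# Auxiliary structure: store only the keys the model gets wrong, so
# lookups stay exact; inserts/updates can also go here, avoiding
# retraining of the mapping.
model.eval()
aux = {}
with torch.no_grad():
    pred_a, pred_b = [o.argmax(dim=1) for o in model(keys)]
for i in range(len(keys)):
    truth = (int(values_a[i]), int(values_b[i]))
    if (int(pred_a[i]), int(pred_b[i])) != truth:
        aux[int(keys[i])] = truth

def lookup(key: int):
    """Exact key-based query: the auxiliary table overrides the model."""
    if key in aux:
        return aux[key]
    with torch.no_grad():
        out = model(torch.tensor([key]))
    return tuple(int(o.argmax(dim=1)) for o in out)

print(lookup(42), (int(values_a[42]), int(values_b[42])))  # should match
```

In this sketch the compression effect comes from the model weights replacing the table: only keys the network fails to memorize occupy explicit storage, which is why the paper's architecture search over network shape matters for the overall size.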