Neural compression is the application of neural networks and other machine learning methods to data compression. Recent advances in statistical machine learning have opened up new possibilities for data compression, allowing compression algorithms to be learned end-to-end from data using powerful generative models such as normalizing flows, variational autoencoders, diffusion probabilistic models, and generative adversarial networks. The present article aims to introduce this field of research to a broader machine learning audience by reviewing the necessary background in information theory (e.g., entropy coding, rate-distortion theory) and computer vision (e.g., image quality assessment, perceptual metrics), and providing a curated guide through the essential ideas and methods in the literature thus far.
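As a concrete illustration of what "learned end-to-end" means in this setting (a sketch only; the notation below is illustrative and not taken from this abstract), lossy neural codecs are commonly trained by jointly minimizing a rate-distortion Lagrangian over an encoder $f_\theta$, a decoder $g_\phi$, and a learned entropy model $p_\psi$:
\[
  \min_{\theta,\phi,\psi}\;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\Big[
    \underbrace{-\log_2 p_\psi\big(\lfloor f_\theta(x) \rceil\big)}_{\text{rate (expected code length)}}
    \;+\; \lambda\,
    \underbrace{d\big(x,\; g_\phi(\lfloor f_\theta(x) \rceil)\big)}_{\text{distortion}}
  \Big],
\]
where $\lfloor\cdot\rceil$ denotes quantization of the latent code, $d(\cdot,\cdot)$ is a distortion measure such as squared error or a perceptual metric, and $\lambda > 0$ sets the trade-off between bit rate and reconstruction quality; sweeping $\lambda$ traces out a rate-distortion curve.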