Recent advances in deep learning have led to a paradigm shift in reversible steganography. A fundamental pillar of reversible steganography is predictive modelling, which can be realised via deep neural networks. However, non-trivial prediction errors arise for out-of-distribution and noisy data. In view of this issue, we propose to account for uncertainty in predictive models within the theoretical framework of Bayesian deep learning, thereby creating an adaptive steganographic system. Most modern deep-learning models are regarded as deterministic because they offer only point predictions without any measure of uncertainty. Bayesian neural networks bring a probabilistic perspective to deep learning and can be regarded as self-aware intelligent machinery; that is, a machine that knows its own limitations. To quantify uncertainty, we apply Bayesian statistics to model the predictive distribution and approximate it through Monte Carlo sampling with stochastic forward passes. We further show that predictive uncertainty can be disentangled into aleatoric and epistemic components, and that these quantities can be learnt in an unsupervised manner. Experimental results demonstrate that Bayesian uncertainty analysis improves steganographic rate-distortion performance.
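The Monte Carlo approximation described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual network: `stochastic_forward`, the dropout rate, and the weight vectors `w_mean` and `w_logvar` are all hypothetical stand-ins for a trained predictive model. Epistemic uncertainty is estimated as the variance of the predicted means across stochastic passes, while aleatoric uncertainty is the average of the model's own predicted observation variances.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, w_mean, w_logvar, drop_p=0.5):
    """One stochastic forward pass (hypothetical linear model with dropout).

    Returns a predicted mean and a predicted log-variance; the latter is
    the model's learnt estimate of observation noise (aleatoric)."""
    mask = rng.random(x.shape) > drop_p
    h = x * mask / (1.0 - drop_p)   # inverted dropout keeps expectation fixed
    mu = h @ w_mean                 # predicted value (e.g. a pixel intensity)
    log_var = h @ w_logvar          # predicted log observation variance
    return mu, log_var

def mc_uncertainty(x, w_mean, w_logvar, T=100):
    """Approximate the predictive distribution with T stochastic passes.

    Epistemic uncertainty = variance of the means across passes;
    aleatoric uncertainty = mean of the predicted variances."""
    mus, variances = [], []
    for _ in range(T):
        mu, log_var = stochastic_forward(x, w_mean, w_logvar)
        mus.append(mu)
        variances.append(np.exp(log_var))
    mus = np.asarray(mus)
    prediction = mus.mean(axis=0)
    epistemic = mus.var(axis=0)
    aleatoric = np.mean(variances, axis=0)
    return prediction, aleatoric, epistemic
```

In an adaptive steganographic system of the kind the abstract describes, per-pixel uncertainty estimates like these could then steer embedding away from regions where the predictor is unreliable.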