Recent advances in deep learning have led to a paradigm shift in reversible steganography. A fundamental pillar of reversible steganography is predictive modelling, which can be realised via deep neural networks. However, inference on out-of-distribution and noisy data can incur non-trivial errors. In view of this issue, we propose to incorporate uncertainty into predictive models based upon the theoretical framework of Bayesian deep learning. Bayesian neural networks can be regarded as self-aware machinery; that is, a machine that knows its own limitations. To quantify uncertainty, we approximate the posterior predictive distribution through Monte Carlo sampling with stochastic forward passes. We further show that predictive uncertainty can be disentangled into aleatoric and epistemic uncertainties, and that these quantities can be learnt in an unsupervised manner. Experimental results demonstrate that Bayesian uncertainty analysis improves steganographic capacity-distortion performance.
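The Monte Carlo procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a hypothetical one-layer regression network with dropout kept active at inference (MC dropout), whose two outputs are a predicted mean and log-variance. Averaging the predicted variances over stochastic passes estimates aleatoric uncertainty, while the variance of the predicted means estimates epistemic uncertainty; all weights here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny network: one hidden layer, outputs (mean, log-variance).
W1 = rng.normal(size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 2)); b2 = np.zeros(2)

def stochastic_forward(x, p=0.5):
    """One forward pass with dropout left active at test time."""
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    mask = rng.random(h.shape) > p          # Bernoulli dropout mask
    h = h * mask / (1.0 - p)                # inverted-dropout scaling
    out = h @ W2 + b2
    mu, log_var = out[..., 0], out[..., 1]
    return mu, np.exp(log_var)              # predicted mean and variance

def mc_uncertainty(x, T=100):
    """Monte Carlo approximation of the posterior predictive distribution."""
    mus, var_preds = zip(*(stochastic_forward(x) for _ in range(T)))
    mus, var_preds = np.stack(mus), np.stack(var_preds)
    predictive_mean = mus.mean(axis=0)
    epistemic = mus.var(axis=0)        # spread of means across passes
    aleatoric = var_preds.mean(axis=0) # average predicted data noise
    return predictive_mean, epistemic, aleatoric

x = np.array([[0.3]])
mean, epistemic, aleatoric = mc_uncertainty(x)
```

In a reversible-steganography pipeline, pixels whose epistemic uncertainty exceeds a threshold would simply be excluded from embedding, trading a little capacity for lower distortion.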