For $\tilde{f}(t) = \exp(\frac{\alpha-1}{\alpha}t)$, this paper proposes an $\tilde{f}$-mean information gain measure. R\'{e}nyi divergence is shown to be the maximum $\tilde{f}$-mean information gain incurred at each elementary event $y$ of the channel output $Y$, and Sibson mutual information is the $\tilde{f}$-mean of this $Y$-elementary information gain. Both are proposed as $\alpha$-leakage measures, quantifying the most information an adversary can obtain about sensitive data. It is shown that the existing $\alpha$-leakage measure based on Arimoto mutual information can be expressed as an $\tilde{f}$-mean measure under a scaled probability distribution. Further, Sibson mutual information is interpreted as the maximum $\tilde{f}$-mean information gain over all estimation decisions applied to the channel output. This reveals that the existing generalized Blahut-Arimoto method for computing R\'{e}nyi capacity (or Gallager's error exponent) in fact maximizes an $\tilde{f}$-mean information gain iteratively over the estimation decision and the channel input. This paper also derives a decomposition of the $\tilde{f}$-mean information gain, analogous to the Sibson identity for R\'{e}nyi divergence.
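For reference, a hedged sketch of the quantities named above in standard notation (the symbols $P_X$, $P_{Y|X}$, and the estimation decision $\Phi$ are assumptions on my part; the paper's own notation may differ). The $\tilde{f}$-mean is the Kolmogorov--Nagumo mean induced by $\tilde{f}$: for a gain $g$ under distribution $P$,

$$\tilde{f}^{-1}\big(\mathbb{E}_P[\tilde{f}(g)]\big) = \frac{\alpha}{\alpha-1}\log \mathbb{E}_P\Big[e^{\frac{\alpha-1}{\alpha}g}\Big].$$

The standard definitions of R\'{e}nyi divergence and Sibson mutual information of order $\alpha$ are

$$D_\alpha(P\|Q) = \frac{1}{\alpha-1}\log\sum_x P(x)^\alpha Q(x)^{1-\alpha}, \qquad I_\alpha^{S}(X;Y) = \frac{\alpha}{\alpha-1}\log\sum_y\Big(\sum_x P_X(x)\,P_{Y|X}(y|x)^\alpha\Big)^{1/\alpha}.$$

Consistent with the abstract's interpretation, for $\alpha > 1$ one can verify the variational form

$$I_\alpha^{S}(X;Y) = \max_{\Phi}\; \frac{\alpha}{\alpha-1}\log\sum_{x,y} P_X(x)\,P_{Y|X}(y|x)\left(\frac{\Phi(x|y)}{P_X(x)}\right)^{\frac{\alpha-1}{\alpha}},$$

i.e., the maximum over estimation decisions $\Phi(\cdot|y)$ of the $\tilde{f}$-mean of the information gain $\log\frac{\Phi(x|y)}{P_X(x)}$, with the maximizer $\Phi^*(x|y) \propto P_X(x)\,P_{Y|X}(y|x)^\alpha$.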
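The abstract's observation about the generalized Blahut-Arimoto method suggests the following minimal NumPy sketch of that alternating maximization (written for $\alpha > 1$; the function name `renyi_capacity_ba` and the stopping rule are my assumptions, not the paper's):

```python
import numpy as np

def renyi_capacity_ba(W, alpha, n_iter=500, tol=1e-12):
    """Sketch of the generalized Blahut-Arimoto iteration for Renyi capacity
    of order alpha (assumes alpha > 1).

    W : (|X|, |Y|) row-stochastic channel matrix P_{Y|X}.
    Alternates between the estimation-decision update Phi(x|y) and the
    input update P(x); each step is the closed-form maximizer of the
    f~-mean information gain with the other variable held fixed.
    """
    nx, ny = W.shape
    P = np.full(nx, 1.0 / nx)          # uniform initial input distribution
    c = (alpha - 1.0) / alpha
    for _ in range(n_iter):
        # Estimation-decision update: Phi(x|y) ∝ P(x) * W(y|x)^alpha
        A = P[:, None] * W**alpha
        Phi = A / A.sum(axis=0, keepdims=True)
        # Input update: P(x) ∝ [ sum_y W(y|x) Phi(x|y)^c ]^(1/c)
        g = (W * Phi**c).sum(axis=1)
        P_new = g**(1.0 / c)
        P_new /= P_new.sum()
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new
    # Sibson mutual information of order alpha at the resulting input
    inner = (P[:, None] * W**alpha).sum(axis=0)
    I = (alpha / (alpha - 1.0)) * np.log(np.sum(inner**(1.0 / alpha)))
    return P, I

# Example: binary symmetric channel with crossover 0.1, alpha = 2
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
P_opt, C2 = renyi_capacity_ba(W, alpha=2.0)
```

For a symmetric channel like this BSC, the iteration should return the uniform input, which is a fixed point by symmetry; for $0 < \alpha < 1$ the sign of $\frac{\alpha}{\alpha-1}$ flips and the inner optimization direction changes, so this sketch does not cover that regime.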