We establish the fundamental limit of Semantic Communications (joint source-channel coding) when the transmission must be kept covert from an external warden. We derive an information-theoretic achievability result and a matching converse, and we show that source-channel coding separation holds for this setup. Furthermore, we demonstrate experimentally that a deep neural network can be trained to achieve covert semantic communication for a classification task. Our numerical experiments confirm our theoretical finding that, for reliable joint source-channel coding under the covertness constraint, the number of transmitted source symbols can scale only as the square root of the number of channel uses.
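The square-root scaling law stated above can be written explicitly as follows; this is a minimal sketch of the asymptotic claim, where the notation $k(n)$ and the constant $c > 0$ (which would depend on the source, the channel, and the covertness constraint) are placeholders not specified in the abstract itself:
\[
  k(n) = \Theta\!\left(\sqrt{n}\right),
  \qquad\text{i.e.,}\qquad
  \lim_{n \to \infty} \frac{k(n)}{\sqrt{n}} = c ,
\]
where $k(n)$ denotes the number of source symbols that can be reliably and covertly conveyed over $n$ channel uses.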