This paper investigates distributed joint source-channel coding (JSCC) for correlated image semantic transmission over wireless channels. In this setup, correlated images at different transmitters are separately encoded and transmitted through dedicated channels for joint recovery at the receiver. We propose a novel distributed nonlinear transform source-channel coding (D-NTSCC) framework. Unlike existing learning-based approaches that implicitly learn source correlation in a purely data-driven manner, our method explicitly models the source correlation through a joint distribution. Specifically, the correlated images are separately encoded into latent representations via an encoding transform function, followed by a JSCC encoder to produce channel input symbols. A learned joint entropy model is introduced to determine the transmission rates; it more accurately approximates the joint distribution of the latent representations and captures source dependencies, thereby improving rate-distortion performance. At the receiver, a JSCC decoder and a decoding transform function reconstruct the images from the received signals, with each reconstructed image serving as side information for recovering the other. Therein, a transformation module is designed to align the latent representations for maximal correlation learning. Furthermore, a loss function is derived to jointly optimize encoding, decoding, and the joint entropy model, ensuring that the learned joint entropy model approximates the true joint distribution. Experiments on multi-view datasets show that D-NTSCC outperforms state-of-the-art distributed schemes, demonstrating its effectiveness in exploiting source correlation.
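To make the described pipeline concrete, the following is a minimal PyTorch sketch of a two-transmitter D-NTSCC-style system under our own assumptions: the module names, layer sizes, Gaussian parameterization of the joint entropy model, AWGN channel, and the fixed rate-distortion weight are all illustrative choices, not the authors' implementation, and the latent-alignment (transformation) module and per-element rate allocation described in the abstract are omitted.

```python
# Illustrative sketch only; all architectural choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AnalysisTransform(nn.Module):
    """Encoding transform g_a: image -> latent representation (assumed CNN)."""
    def __init__(self, c=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, c, 5, 2, 2), nn.GELU(),
            nn.Conv2d(c, c, 5, 2, 2), nn.GELU(),
            nn.Conv2d(c, c, 5, 2, 2),
        )

    def forward(self, x):
        return self.net(x)


class SynthesisTransform(nn.Module):
    """Decoding transform g_s: latent -> reconstructed image (assumed CNN)."""
    def __init__(self, c=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(c, c, 5, 2, 2, output_padding=1), nn.GELU(),
            nn.ConvTranspose2d(c, c, 5, 2, 2, output_padding=1), nn.GELU(),
            nn.ConvTranspose2d(c, 3, 5, 2, 2, output_padding=1),
        )

    def forward(self, y):
        return self.net(y)


class JSCCCodec(nn.Module):
    """Per-link JSCC encoder/decoder pair (assumed 1x1 conv projections).
    The decoder consumes its own received symbols plus the other link's
    received symbols as side information."""
    def __init__(self, c=64, symbols=16):
        super().__init__()
        self.enc = nn.Conv2d(c, symbols, 1)
        self.dec = nn.Conv2d(2 * symbols, c, 1)

    def encode(self, y):
        s = self.enc(y)
        return s / s.pow(2).mean().sqrt().clamp_min(1e-8)  # power normalization

    def decode(self, s_own, s_side):
        return self.dec(torch.cat([s_own, s_side], dim=1))


class JointEntropyModel(nn.Module):
    """Joint (conditional Gaussian) density over the two latents; its negative
    log-likelihood acts as the rate proxy in the training objective."""
    def __init__(self, c=64):
        super().__init__()
        self.param = nn.Conv2d(2 * c, 4 * c, 1)  # means and scales for both latents

    def forward(self, y1, y2):
        mu1, mu2, s1, s2 = self.param(torch.cat([y1, y2], dim=1)).chunk(4, dim=1)
        rate = 0.0
        for y, mu, s in ((y1, mu1, s1), (y2, mu2, s2)):
            scale = F.softplus(s) + 1e-6
            rate = rate + (0.5 * ((y - mu) / scale) ** 2 + scale.log()).mean()
        return rate


def awgn(s, snr_db=10.0):
    """AWGN channel at the given SNR, assuming unit signal power."""
    sigma = 10 ** (-snr_db / 20)
    return s + sigma * torch.randn_like(s)


# One joint training step over a correlated image pair (x1, x2).
g_a1, g_a2 = AnalysisTransform(), AnalysisTransform()
g_s1, g_s2 = SynthesisTransform(), SynthesisTransform()
codec1, codec2 = JSCCCodec(), JSCCCodec()
entropy = JointEntropyModel()

x1, x2 = torch.rand(2, 1, 3, 64, 64)             # toy stand-ins for correlated views
y1, y2 = g_a1(x1), g_a2(x2)                       # separate encoding at each transmitter
r1, r2 = awgn(codec1.encode(y1)), awgn(codec2.encode(y2))
y1_hat = codec1.decode(r1, r2)                    # each link uses the other as side info
y2_hat = codec2.decode(r2, r1)
x1_hat, x2_hat = g_s1(y1_hat), g_s2(y2_hat)

lam = 0.01                                        # rate-distortion trade-off (assumed)
loss = F.mse_loss(x1_hat, x1) + F.mse_loss(x2_hat, x2) + lam * entropy(y1, y2)
loss.backward()
```

The joint entropy model here conditions each latent's Gaussian parameters on both latents, so correlated sources yield a lower rate term than two independent entropy models would, which is the intuition behind the rate-distortion gain claimed in the abstract.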