The detection and estimation of sinusoids is a fundamental signal processing task for many applications related to sensing and communications. While algorithms have been proposed for this setting, quantization is a critical but often ignored modeling effect. In wireless communications, estimation with low-resolution data converters is relevant for reducing power consumption in wideband receivers. Similarly, low-resolution sampling in imaging and spectrum sensing allows for efficient data collection. In this work, we propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples. We incorporate signal reconstruction internally as domain knowledge within the network to enhance learning and surpass traditional algorithms in mean squared error and Chamfer error. We introduce a worst-case learning threshold for comparing the results of our network against the underlying data distributions. This threshold provides insight into why neural networks tend to outperform traditional methods and into the learned relationships between the input and output distributions. In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data. We use the learning threshold to explain, in the one-bit case, how our estimators learn to minimize the distributional loss rather than learning features from the data.
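To make the measurement model concrete, the following is a minimal NumPy sketch of the low-resolution observations described above: the in-phase and quadrature components of a multi-sinusoid signal are quantized separately with a b-bit mid-rise quantizer, where one-bit reduces to taking the sign. The function name, quantizer design, and parameter values here are illustrative assumptions, not the paper's exact simulation setup.

```python
import numpy as np

def quantize_iq(x, bits, max_amp=1.0):
    """Uniform b-bit quantization applied separately to the in-phase
    and quadrature components of a complex signal (illustrative sketch)."""
    if bits == 1:
        # One-bit quantization keeps only the sign of each component.
        return np.sign(x.real) + 1j * np.sign(x.imag)
    levels = 2 ** bits
    step = 2 * max_amp / levels
    def q(v):
        # Clip to the quantizer range, then map to mid-rise output levels.
        idx = np.clip(np.floor(v / step) + levels // 2, 0, levels - 1)
        return (idx - levels // 2 + 0.5) * step
    return q(x.real) + 1j * q(x.imag)

# Example: N quantized I/Q samples of a sum of K sinusoids
# (frequencies, amplitudes, and phases are placeholder values).
rng = np.random.default_rng(0)
N, K = 64, 2
n = np.arange(N)
freqs = rng.uniform(-0.5, 0.5, K)   # normalized frequencies
amps = rng.uniform(0.5, 1.0, K)
phases = rng.uniform(0, 2 * np.pi, K)
x = sum(a * np.exp(1j * (2 * np.pi * f * n + p))
        for a, f, p in zip(amps, freqs, phases))
x /= np.max(np.abs(x))              # normalize before quantization
y3 = quantize_iq(x, bits=3)         # three-bit observations
y1 = quantize_iq(x, bits=1)         # one-bit (sign) observations
```

Under this kind of model, `y3` retains coarse amplitude information while `y1` preserves only zero crossings, which is consistent with the gap between the three-bit and one-bit results reported above.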