Sparse modeling for signal processing and machine learning has been a focus of scientific research for over two decades. Among others, supervised sparsity-aware learning comprises two major paths paved by: a) discriminative methods and b) generative methods. The latter, more widely known as Bayesian methods, enable uncertainty evaluation with respect to the performed predictions. Furthermore, they can better exploit related prior information and naturally introduce robustness into the model, owing to their unique capacity to marginalize out uncertainties related to the parameter estimates. Moreover, hyperparameters associated with the adopted priors can be learned from the training data. To implement sparsity-aware learning, the crucial point lies in the choice of the function regularizer for discriminative methods and the choice of the prior distribution for Bayesian learning. Over the last decade or so, owing to the intense research on deep learning, emphasis has been placed on discriminative techniques. However, a comeback of Bayesian methods is taking place that sheds new light on the design of deep neural networks, establishes firm links with Bayesian models, and inspires new paths for unsupervised learning, such as Bayesian tensor decomposition. The goal of this article is twofold. First, to review, in a unified way, some recent advances in incorporating sparsity-promoting priors into three highly popular data modeling tools, namely deep neural networks, Gaussian processes, and tensor decomposition. Second, to review the associated inference techniques from different perspectives, including evidence maximization via optimization and variational inference methods. Challenges such as the small-data dilemma, automatic model structure search, and natural prediction uncertainty evaluation are also discussed. Typical signal processing and machine learning tasks are demonstrated.
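As a concrete illustration of the sparsity-promoting priors and evidence-maximization inference discussed above, the sketch below implements classic sparse Bayesian learning for a linear model under an automatic relevance determination (ARD) prior, using the well-known fixed-point hyperparameter update from Tipping's relevance vector machine. All names, dimensions, noise levels, and thresholds here are illustrative assumptions, not the article's specific setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse linear regression: y = Phi @ w + noise, with w mostly zero.
N, M = 100, 20
Phi = rng.standard_normal((N, M))
w_true = np.zeros(M)
w_true[[2, 7]] = [1.5, -2.0]          # only two active coefficients
y = Phi @ w_true + 0.05 * rng.standard_normal(N)

# ARD prior: each weight w_i ~ N(0, 1/alpha_i), one precision per weight.
alpha = np.ones(M)                    # prior precisions (hyperparameters)
beta = 1.0 / 0.05**2                  # noise precision, assumed known here

# Evidence maximization (type-II ML) via the standard fixed-point updates.
for _ in range(100):
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)  # posterior cov.
    mu = beta * Sigma @ Phi.T @ y                               # posterior mean
    gamma = 1.0 - alpha * np.diag(Sigma)   # how "well determined" each weight is
    alpha = gamma / mu**2                  # fixed-point update of the precisions
    alpha = np.clip(alpha, 1e-6, 1e6)      # clip for numerical stability

# Large alpha_i shrinks w_i to zero; small alpha_i marks the sparse support.
support = np.where(alpha < 1e3)[0]
print("recovered support:", support)
print("posterior mean on support:", mu[support])
```

Pruned precisions saturate at the clip value, so thresholding `alpha` recovers the active set; in this toy setting the updates typically identify the two true coefficients within a few dozen iterations.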