Modern variable selection procedures make use of penalization methods to perform simultaneous model selection and estimation. A popular example is the LASSO (least absolute shrinkage and selection operator), whose use requires selecting the value of a tuning parameter. This parameter is typically chosen by minimizing the cross-validation error or the Bayesian information criterion (BIC), but this can be computationally intensive as it involves fitting an array of different models and selecting the best one. In contrast with this standard approach, we develop a procedure based on the so-called "smooth IC" (SIC), in which the tuning parameter is selected automatically in a single step. We also extend this model selection procedure to the distributional regression framework, which is more flexible than classical regression modelling. Distributional regression, also known as multiparameter regression (MPR), introduces flexibility by accounting for the effects of covariates through multiple distributional parameters simultaneously, e.g., the mean and the variance. These models are useful in the context of normal linear regression when the process under study exhibits heteroscedastic behaviour. Reformulating the distributional regression estimation problem in terms of a penalized likelihood enables us to exploit the close relationship between model selection criteria and penalization. Using the SIC is computationally advantageous, as it obviates the need to select multiple tuning parameters.
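To make the "standard approach" concrete, the following is a minimal sketch (not the SIC method itself) of tuning the LASSO penalty by fitting a grid of models and keeping the BIC-minimizing one. The data-generating process, grid, and coordinate-descent solver are illustrative assumptions chosen for self-containment.

```python
import numpy as np

# Illustrative simulated data: 3 genuine signals among 10 candidate covariates.
rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
true_beta = np.zeros(p)
true_beta[:3] = [2.0, -1.5, 1.0]
y = X @ true_beta + rng.standard_normal(n)

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/(2n))||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = np.sum(X ** 2, axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding covariate j, then soft-threshold.
            partial = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ partial, n * lam) / col_ss[j]
    return beta

def bic(y, fitted, df):
    rss = np.sum((y - fitted) ** 2)
    return len(y) * np.log(rss / len(y)) + df * np.log(len(y))

# The computational burden the abstract refers to: one fit per grid point,
# followed by selection of the BIC-minimizing model.
grid = np.logspace(-3, 0, 30)
fits = [(lam, lasso_cd(X, y, lam)) for lam in grid]
best_lam, best_beta = min(
    fits, key=lambda f: bic(y, X @ f[1], np.count_nonzero(f[1]))
)
print("selected lambda:", best_lam)
print("selected variables:", np.flatnonzero(best_beta))
```

The SIC approach described above avoids this grid search entirely, since the tuning parameter is determined automatically within a single optimization.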
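The distributional (multiparameter) regression idea can likewise be sketched with a heteroscedastic normal model in which covariates enter both the mean and the log-standard-deviation, fitted by maximum likelihood. The designs, true coefficients, and use of a generic BFGS optimizer are illustrative assumptions; no penalization or variable selection is included here.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative simulated data: both the mean and the (log) standard
# deviation depend on the covariate x, i.e., heteroscedasticity.
rng = np.random.default_rng(1)
n = 500
x = rng.uniform(-1.0, 1.0, n)
y = (1.0 + 2.0 * x) + np.exp(-0.5 + 1.0 * x) * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])  # design matrix for the mean
Z = X                                 # design matrix for log-sd (could differ)

def negloglik(theta):
    """Negative normal log-likelihood with covariates in both parameters."""
    beta, alpha = theta[:2], theta[2:]
    mu = X @ beta
    log_sd = Z @ alpha
    return np.sum(log_sd + 0.5 * ((y - mu) / np.exp(log_sd)) ** 2)

res = minimize(negloglik, np.zeros(4), method="BFGS")
beta_hat, alpha_hat = res.x[:2], res.x[2:]
print("mean coefficients:", beta_hat)     # roughly (1, 2)
print("log-sd coefficients:", alpha_hat)  # roughly (-0.5, 1)
```

A classical homoscedastic regression would force the log-sd design to an intercept only; allowing covariates there is what gives MPR its extra flexibility, and it is this joint likelihood that the penalized SIC formulation operates on.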