Transfer learning has become an essential technique for exploiting information from a source domain to boost performance on a target task. Despite their prevalence in high-dimensional data, heterogeneity and heavy tails tend to be discounted in current transfer learning approaches, which may undermine the resulting performance. We propose a transfer learning procedure within the framework of high-dimensional quantile regression models to accommodate heterogeneity and heavy tails in the source and target domains. We establish error bounds for the transfer learning estimator based on carefully selected transferable source domains, showing that lower error bounds can be achieved under an appropriate selection criterion and with larger sample sizes in the source tasks. We further propose valid confidence intervals and hypothesis testing procedures for individual components of the quantile regression coefficients by developing a one-step debiased version of the transfer learning estimator, in which a consistent variance estimator is constructed by again applying the transfer learning technique. Simulation results demonstrate that the proposed method exhibits favorable performance.
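To make the abstract's two-step idea concrete, the sketch below shows a generic pooled-then-corrected scheme for L1-penalized quantile regression: fit a pooled estimate on the target plus the transferable sources, then fit a sparse target-specific correction. This is a minimal illustration under assumed regularization parameters (lam_pool, lam_corr) and an already-selected set of transferable sources; it is not the paper's exact estimator, data-driven selection rule, or debiasing procedure.

```python
# Minimal sketch of a two-step transfer scheme for L1-penalized quantile
# regression. Assumes `sources` already contains the transferable domains;
# the paper's selection criterion and one-step debiasing are not reproduced.
import numpy as np
from sklearn.linear_model import QuantileRegressor

def transfer_quantile_fit(X_target, y_target, sources, tau=0.5,
                          lam_pool=0.1, lam_corr=0.05):
    """sources: list of (X_k, y_k) pairs judged transferable."""
    # Step 1: pooled L1-penalized quantile fit on target + sources.
    X_pool = np.vstack([X_target] + [Xk for Xk, _ in sources])
    y_pool = np.concatenate([y_target] + [yk for _, yk in sources])
    pooled = QuantileRegressor(quantile=tau, alpha=lam_pool,
                               fit_intercept=False, solver="highs")
    pooled.fit(X_pool, y_pool)
    w = pooled.coef_

    # Step 2: sparse correction on the target alone, fitting the quantile
    # loss to the residual y - X @ w, then adding the contrast back.
    corr = QuantileRegressor(quantile=tau, alpha=lam_corr,
                             fit_intercept=False, solver="highs")
    corr.fit(X_target, y_target - X_target @ w)
    return w + corr.coef_
```

The sketch stops at point estimation; in the abstract's procedure, the transferable set is selected by a criterion on the source domains, and a one-step debiased estimator with a transfer-based variance estimate then yields confidence intervals and tests for individual coefficients.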