An accelerated failure time (AFT) model assumes a log-linear relationship between failure times and a set of covariates. In contrast to other popular survival models that operate on hazard functions, the effects of covariates act directly on failure times, which makes their interpretation intuitive. The semiparametric AFT model, which does not specify the error distribution, is flexible and robust to departures from distributional assumptions. Owing to these desirable features, this class of models has been considered a promising alternative to the popular Cox model in the analysis of censored failure time data. However, these AFT models typically assume a linear predictor for the mean, and little research has addressed nonlinearity of the predictor when modeling the mean. Deep neural networks (DNNs) have received considerable attention over the past decades and have achieved remarkable success in a variety of fields. DNNs have a number of notable advantages and have been shown to be particularly useful in addressing nonlinearity. Taking advantage of this, we propose to apply DNNs to fitting AFT models using a Gehan-type loss, combined with a sub-sampling technique. Finite sample properties of the proposed DNN and rank-based AFT model (DeepR-AFT) are investigated via an extensive simulation study. DeepR-AFT shows superior performance over its parametric and semiparametric counterparts when the predictor is nonlinear. For linear predictors, DeepR-AFT performs better when the dimension of the covariates is large. The proposed DeepR-AFT is illustrated using two real datasets, which demonstrates its superiority.
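To make the loss concrete, the sketch below implements a Gehan-type objective of the form commonly used in rank-based AFT estimation, L = n^{-2} Σ_i Σ_j δ_i max{0, e_j − e_i}, where e_i = log t_i − f(x_i) is the residual under a (possibly DNN-based) predictor f and δ_i is the censoring indicator. This is a minimal NumPy illustration under that assumed formulation, not the paper's implementation; the function name and toy data are illustrative.

```python
import numpy as np

def gehan_loss(log_time, pred, delta):
    """Gehan-type rank loss for an AFT model.

    log_time : (n,) observed log failure/censoring times
    pred     : (n,) model predictions f(x_i), e.g. a DNN output
    delta    : (n,) censoring indicators (1 = event, 0 = censored)
    Returns the average over all n^2 ordered pairs (i, j) of
    delta_i * max(0, e_j - e_i), where e_i = log_time_i - pred_i.
    """
    e = np.asarray(log_time, float) - np.asarray(pred, float)
    # diff[i, j] = e_j - e_i; only uncensored i contribute
    diff = e[None, :] - e[:, None]
    return float((np.asarray(delta, float)[:, None]
                  * np.maximum(diff, 0.0)).mean())

# Toy check: two uncensored observations with residuals 0 and 1.
# Pairs contribute max(0, 1-0) = 1 once out of 4 ordered pairs.
loss = gehan_loss(log_time=[0.0, 1.0], pred=[0.0, 0.0], delta=[1, 1])
print(loss)  # 0.25
```

In a DNN setting, `pred` would be the network output and this loss (written with autograd tensors instead of NumPy) would be minimized by gradient descent; the sub-sampling mentioned in the abstract would replace the full double sum over pairs with a random subset of pairs per step to reduce the O(n^2) cost.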