While deep neural networks (DNNs) are increasingly applied to choice analysis, it remains challenging to reconcile domain-specific behavioral knowledge with general-purpose DNNs, to improve DNNs' interpretability and predictive power, and to identify effective regularization methods for specific tasks. This study designs a DNN architecture with alternative-specific utility functions (ASU-DNN) by using prior behavioral knowledge. Unlike a fully connected DNN (F-DNN), which computes the utility of an alternative k from the attributes of all the alternatives, ASU-DNN computes it from only k's own attributes. Theoretically, ASU-DNN can dramatically reduce the estimation error of F-DNN owing to its lighter architecture and sparser connectivity. Empirically, ASU-DNN achieves 2-3% higher prediction accuracy than F-DNN across the whole hyperparameter space on a private dataset collected in Singapore and a public dataset from the R mlogit package. The alternative-specific connectivity constraint, as a domain-knowledge-based regularization method, is more effective than the most popular general-purpose explicit and implicit regularization methods and architectural hyperparameters. ASU-DNN is also more interpretable because it yields a more regular substitution pattern of travel mode choices than F-DNN does. The comparison between ASU-DNN and F-DNN can also aid in testing behavioral knowledge: our results suggest that individuals are more likely to compute utility from an alternative's own attributes, supporting the long-standing practice in choice modeling. Overall, this study demonstrates that prior behavioral knowledge can guide the architecture design of DNNs, function as an effective domain-knowledge-based regularization method, and improve both the interpretability and predictive power of DNNs in choice analysis.
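The contrast between the two architectures can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the paper's implementation: layer sizes, the tanh activation, and the parameter shapes are assumptions chosen for clarity. The point it shows is structural: in F-DNN every utility depends on the concatenated attributes of all K alternatives, whereas in ASU-DNN each alternative k has its own small subnetwork fed only by its own attributes, giving far fewer connections.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def f_dnn_utilities(x_all, W1, b1, W2, b2):
    """F-DNN: attributes of ALL alternatives feed every utility."""
    h = np.tanh(x_all @ W1 + b1)   # (N, H)
    return h @ W2 + b2             # (N, K)

def asu_dnn_utilities(x_per_alt, params):
    """ASU-DNN: utility of alternative k uses only x_k."""
    utils = [np.tanh(x_k @ W1 + b1) @ W2 + b2
             for x_k, (W1, b1, W2, b2) in zip(x_per_alt, params)]
    return np.concatenate(utils, axis=-1)  # (N, K)

K, D, H, N = 3, 4, 8, 5            # alternatives, attributes, hidden units, samples
rng = np.random.default_rng(0)
x_per_alt = [rng.normal(size=(N, D)) for _ in range(K)]

# ASU-DNN: K disjoint subnetworks, K*(D*H + H + H + 1) parameters.
asu_params = [(rng.normal(size=(D, H)), np.zeros(H),
               rng.normal(size=(H, 1)), np.zeros(1)) for _ in range(K)]
u_asu = asu_dnn_utilities(x_per_alt, asu_params)

# F-DNN: one dense network over the concatenated inputs,
# (K*D)*H + H + H*K + K parameters -- denser connectivity.
x_all = np.concatenate(x_per_alt, axis=-1)  # (N, K*D)
W1, b1 = rng.normal(size=(K * D, H)), np.zeros(H)
W2, b2 = rng.normal(size=(H, K)), np.zeros(K)
u_f = f_dnn_utilities(x_all, W1, b1, W2, b2)

# Choice probabilities follow from a softmax over the utilities.
p_asu, p_f = softmax(u_asu), softmax(u_f)
```

In this sketch the alternative-specific constraint acts exactly as the abstract describes: it is not an extra penalty term but a sparsity pattern imposed on the connectivity itself, zeroing out every cross-alternative weight by construction.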