This paper presents a novel holistic deep learning framework that simultaneously addresses the challenges of vulnerability to input perturbations, overparametrization, and performance instability arising from different train-validation splits. The proposed framework holistically improves accuracy, robustness, sparsity, and stability over standard deep learning models, as demonstrated by extensive experiments on both tabular and image data sets. The results are further validated by ablation experiments and SHAP value analysis, which reveal the interactions and trade-offs between the different evaluation metrics. To support practitioners applying our framework, we provide a prescriptive approach that recommends an appropriate training loss function based on their specific objectives. All the code to reproduce the results can be found at https://github.com/kimvc7/HDL.