We investigate the capabilities of transfer learning in the area of structural health monitoring. In particular, we are interested in damage detection for concrete structures. Typical image datasets for such problems are relatively small, calling for the transfer of learned representations from a related large-scale dataset. Past efforts at image-based damage detection have mainly considered cross-domain transfer learning approaches using pre-trained IMAGENET models that are subsequently fine-tuned for the target task. However, there are rising concerns about the generalizability of IMAGENET representations to specific target domains, such as visual inspection and medical imaging. We therefore evaluate a combination of in-domain and cross-domain transfer learning strategies for damage detection in bridges. We perform comprehensive comparisons to study the impact of cross-domain and in-domain transfer, with various initialization strategies, using six publicly available visual inspection datasets. The pre-trained models are also evaluated for their ability to cope with the extremely low-data regime. We show that the combination of cross-domain and in-domain transfer consistently yields superior performance, especially with tiny datasets. We also provide visual explanations of the predictive models to enable algorithmic transparency and to give experts insight into the intrinsic decision logic of typically black-box deep models.