We investigate the capabilities of transfer learning in the area of structural health monitoring. In particular, we are interested in damage detection for concrete structures. Typical image datasets for such problems are relatively small, calling for the transfer of learned representations from a related large-scale dataset. Past efforts at image-based damage detection have mainly considered cross-domain transfer learning, using ImageNet-pretrained models that are subsequently fine-tuned for the target task. However, there are rising concerns about the generalizability of ImageNet representations to specific target domains, such as visual inspection and medical imaging. We therefore propose a combination of in-domain and cross-domain transfer learning strategies for damage detection in bridges. We perform comprehensive comparisons to study the impact of cross-domain and in-domain transfer, with various initialization strategies, using six publicly available visual inspection datasets. The pre-trained models are also evaluated for their ability to cope with the extremely low-data regime. We show that the combination of cross-domain and in-domain transfer consistently yields superior performance, even with tiny datasets. Finally, we provide visual explanations of the predictive models to enable algorithmic transparency and to give experts insight into the intrinsic decision logic of typically black-box deep models.