Accurately recognizing health-related conditions from wearable data is crucial for improving healthcare outcomes. To improve recognition accuracy, various approaches have focused on how to effectively fuse information from multiple sensors. However, while sensor fusion is common in many applications, it may not always be feasible in real-world settings. For example, although combining bio-signals from multiple sensors (i.e., a chest-pad sensor and a wrist-worn sensor) has been proven effective for improving performance, wearing multiple devices may be impractical in free-living contexts. To address this challenge, we propose an effective more-to-less (M2L) learning framework that improves testing performance with reduced sensors by leveraging the complementary information of multiple modalities during training. More specifically, different sensors may carry different but complementary information, and our model is designed to enforce collaboration among modalities, encouraging positive knowledge transfer and suppressing negative knowledge transfer, so that better representations are learned for each individual modality. Our experimental results show that our framework achieves performance comparable to that obtained with the full set of modalities. Our code and results will be available at https://github.com/compwell-org/More2Less.git.
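To make the abstract's idea concrete, below is a minimal PyTorch sketch of one way a more-to-less setup could be implemented; it is an illustrative assumption, not the authors' exact method. All class names, architectures, and the gating rule are hypothetical: two modality encoders (chest and wrist) are trained jointly, knowledge from the fused branch is distilled into the wrist-only branch only on samples where the fused branch is more confident in the true label (a simple proxy for encouraging positive transfer and suppressing negative transfer), and inference can then run from the wrist sensor alone.

```python
# Hypothetical M2L sketch: train with both modalities, test with the wrist only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """1D-CNN encoder for a single bio-signal stream (illustrative)."""
    def __init__(self, in_channels: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class M2LNet(nn.Module):
    """Multi-modal teacher branch plus a wrist-only student branch."""
    def __init__(self, n_classes: int, feat_dim: int = 64):
        super().__init__()
        self.chest_enc = ModalityEncoder(in_channels=3, feat_dim=feat_dim)
        self.wrist_enc = ModalityEncoder(in_channels=3, feat_dim=feat_dim)
        self.fused_head = nn.Linear(2 * feat_dim, n_classes)  # full-modality branch
        self.wrist_head = nn.Linear(feat_dim, n_classes)       # reduced-sensor branch

    def forward(self, wrist, chest=None):
        z_w = self.wrist_enc(wrist)
        logits_w = self.wrist_head(z_w)
        if chest is None:  # test time: only the wrist sensor is worn
            return logits_w, None
        z_c = self.chest_enc(chest)
        logits_f = self.fused_head(torch.cat([z_w, z_c], dim=1))
        return logits_w, logits_f

def m2l_loss(logits_w, logits_f, y, T: float = 2.0, alpha: float = 0.5):
    """Supervised loss on both branches plus gated distillation into the wrist
    branch; distillation is applied only where the fused branch assigns higher
    probability to the true class (assumed stand-in for positive transfer)."""
    ce = F.cross_entropy(logits_w, y) + F.cross_entropy(logits_f, y)
    p_w = F.softmax(logits_w, dim=1).gather(1, y[:, None]).squeeze(1)
    p_f = F.softmax(logits_f, dim=1).gather(1, y[:, None]).squeeze(1)
    gate = (p_f > p_w).float()  # 1 where transfer is expected to help
    kd = F.kl_div(F.log_softmax(logits_w / T, dim=1),
                  F.softmax(logits_f.detach() / T, dim=1),
                  reduction="none").sum(dim=1) * T * T
    return ce + alpha * (gate * kd).mean()
```

In this sketch, training calls `model(wrist, chest)` and optimizes `m2l_loss`, while deployment calls `model(wrist)` only, so the reduced-sensor branch benefits from the chest signal seen during training without requiring it at test time.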