A key challenge for robotic systems is to infer the behavior of another agent. The capability to draw correct inferences is crucial for deriving human behavior from examples. Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally, i.e., when only observational evidence is available. For this reason, robots that rely on correlational inferences risk a biased interpretation of the evidence. We propose equipping robots with the tools needed to conduct observational studies on people. Specifically, we propose and explore the feasibility of structural causal models with non-parametric estimators to derive empirical estimates of hand behavior in the context of object manipulation in a virtual kitchen scenario. In particular, we focus on inferences under the (weaker) condition of partial confounding (the model covering only some of the factors) and confront the estimators with hundreds of samples instead of the typical order of thousands. Studying these conditions probes the boundaries of the approach and its viability. Despite the challenging conditions, the estimates inferred from the validation data are correct. Moreover, these estimates are stable under three refutation strategies, and four estimators are in agreement. Furthermore, the causal quantity computed for two individuals reveals the sensitivity of the approach in detecting positive and negative effects. The validity, stability, and explainability of the approach are encouraging and serve as a foundation for further research.
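The pipeline described above (a structural causal model over partially observed confounders, several estimators compared for agreement, and refutation tests for stability) can be sketched with off-the-shelf causal-inference tooling. The sketch below is illustrative only and is not the authors' code: it assumes an implementation built on the DoWhy library, uses synthetic stand-in data, and hypothetical variable names (`object_weight` as the observed confounder, `t`/`y` as treatment and outcome); the particular estimators and refuters chosen here are common DoWhy options, not necessarily the four estimators and three refutation strategies used in the study.

```python
# Minimal sketch, assuming a DoWhy-based analysis with synthetic data.
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
n = 300  # hundreds of samples, matching the study's low-sample regime

# Hypothetical observational data with partial confounding:
# the model will only be told about `object_weight`, not `unmodeled`.
object_weight = rng.normal(size=n)   # observed confounder
unmodeled = rng.normal(size=n)       # confounder the model does not cover
treatment = (object_weight + unmodeled + rng.normal(size=n)) > 0
outcome = 0.8 * treatment + 0.5 * object_weight + 0.3 * unmodeled + rng.normal(size=n)
df = pd.DataFrame({"t": treatment, "y": outcome, "object_weight": object_weight})

# Structural causal model covering only the observed confounder.
model = CausalModel(data=df, treatment="t", outcome="y",
                    common_causes=["object_weight"])
estimand = model.identify_effect(proceed_when_unidentifiable=True)

# Several estimators, compared for agreement on the causal quantity.
estimator_names = [
    "backdoor.propensity_score_matching",
    "backdoor.propensity_score_stratification",
    "backdoor.propensity_score_weighting",
    "backdoor.linear_regression",
]
estimates = {}
for name in estimator_names:
    estimates[name] = model.estimate_effect(estimand, method_name=name)
    print(name, estimates[name].value)

# Refutation strategies applied to one of the estimates to test stability.
for refuter in ["random_common_cause",
                "placebo_treatment_refuter",
                "data_subset_refuter"]:
    result = model.refute_estimate(
        estimand, estimates["backdoor.propensity_score_matching"],
        method_name=refuter)
    print(result)
```

Under this setup, agreement across estimators and small shifts under the refuters would mirror the kind of stability reported in the abstract; the synthetic effect size and noise levels are arbitrary and only serve to make the sketch runnable.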