In toxicology research, experiments are often conducted to determine the effect of toxicant exposure on the behavior of mice, where mice are randomized to receive the toxicant or not. In particular, in fixed interval experiments, a mouse is provided a reinforcer (e.g., a food pellet) contingent on some action it takes (e.g., a press of a lever), but reinforcers are only provided after fixed time intervals. Often, to analyze fixed interval experiments, one specifies and then estimates the conditional state-action distribution (e.g., using an ANOVA). This existing approach, which in the reinforcement learning framework would be called modeling the mouse's "behavioral policy," is sensitive to misspecification. It is likely that any model for the behavioral policy is misspecified: the mapping from a mouse's exposure to its actions can be highly complex. In this work, we avoid specifying the behavioral policy by instead learning the mouse's reward function. Specifying a reward function is as challenging as specifying a behavioral policy, but we propose a novel approach that exploits knowledge of the optimal behavior, which is often known to the experimenter, to avoid specifying the reward function itself. In particular, we define the reward as a divergence of the mouse's actions from optimality, where the representations of the action and of optimality can be arbitrarily complex. The parameter of the reward function then serves as a measure of the mouse's tolerance for divergence from optimality, which is a novel summary of the impact of the exposure. The parameter is scalar, and the proposed objective function is differentiable, allowing us to benefit from standard results on the consistency of parametric estimators while making very few assumptions.
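For concreteness, one way to write such a reward is the following sketch in illustrative notation; the divergence $D$, the representation map $\phi$, and the tolerance parameter $\tau$ are labels introduced here for exposition and are not taken from the paper:
$$
r_\tau(a_t, s_t) \;=\; -\,\frac{1}{\tau}\, D\!\bigl(\phi(a_t),\, \phi^{*}(s_t)\bigr), \qquad \tau > 0,
$$
where $\phi(a_t)$ is a (possibly complex) representation of the observed action, $\phi^{*}(s_t)$ is the corresponding representation of the optimal action in state $s_t$, and a larger value of the scalar $\tau$ corresponds to a smaller penalty for a given divergence, i.e., greater tolerance for divergence from optimality.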