A useful capability is classifying an agent's behavior using data from a sequence, or trace, of sensor measurements. The sensor selection problem involves choosing a subset of available sensors to ensure that, once generated, observation traces will contain enough information to determine whether the agent's activities match some pattern. Generalizing prior work, this paper studies a formulation in which multiple behavioral itineraries may be supplied, with sensors selected to distinguish between behaviors. This allows one to pose fine-grained questions, e.g., to position the agent's activity on a spectrum. In addition, with multiple itineraries, one can also ask about choices of sensors under which some behavior is always plausibly concealed by (or mistaken for, or conflated with) another. Using sensor ambiguity to limit the acquisition of knowledge is a strong privacy guarantee, and one which some earlier work has examined. By concretely formulating privacy requirements for sensor selection, this paper connects both lines of work: privacy -- where the information sensors provide is bounded from above -- and behavior verification -- where it is bounded from below. We examine the worst-case computational complexity that results from both types of bounds, proving that upper bounds are more challenging under standard complexity-theoretic assumptions. Although the problem is intractable in general, we give a novel approach to solving it that can exploit interrelationships between constraints, and we identify a few opportunities for optimization. Case studies are presented to demonstrate the usefulness and scalability of our proposed solution, and to assess the impact of the optimizations.