As extended reality (XR) systems become increasingly immersive and sensor-rich, they enable the collection of behavioral signals such as eye and body telemetry. These signals support personalized and responsive experiences, but they may also contain unique patterns that can be linked back to individuals. However, privacy approaches that naively combine unimodal mechanisms (e.g., independently applying separate privatization mechanisms to eye and body data) are often ineffective at preventing re-identification in practice. In this work, we systematically evaluate real-time privacy mechanisms for XR, both individually and in pairs, across eye and body modalities. We assess privacy through re-identification rates and evaluate utility using numerical performance thresholds derived from existing literature to ensure that real-time interaction requirements are met. We evaluate four eye and ten body mechanisms across multiple datasets comprising up to 407 participants. Our results show that, when carefully paired, multimodal mechanisms reduce the re-identification rate from 80.3% to 26.3% in casual XR applications (e.g., VRChat and Job Simulator) and from 84.8% to 26.1% in competitive XR applications (e.g., Beat Saber and Synth Riders), all while maintaining acceptable performance based on established thresholds. To facilitate adoption, we additionally release the XR Privacy SDK, an open-source toolkit that enables developers to integrate these privacy mechanisms into XR applications for real-time use. These findings underscore the potential of modality-specific and context-aware privacy strategies for protecting behavioral data in XR environments.