The increasing deployment of Machine Learning (ML) models in sensitive domains motivates the need for robust, practical privacy assessment tools. PrivacyGuard is a comprehensive tool for empirical differential privacy (DP) analysis, designed to evaluate privacy risks in ML models through state-of-the-art inference attacks and advanced privacy measurement techniques. To this end, PrivacyGuard implements a diverse suite of privacy attacks -- including membership inference, extraction, and reconstruction attacks -- enabling both off-the-shelf and highly configurable privacy analyses. Its modular architecture allows for the seamless integration of new attacks and privacy metrics, supporting rapid adaptation to emerging research advances. We make PrivacyGuard available at https://github.com/facebookresearch/PrivacyGuard.
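The membership inference attacks mentioned above can be illustrated with the classic loss-threshold attack: an example is flagged as a training-set member when the model's loss on it is unusually low. The sketch below is a generic illustration, not PrivacyGuard's actual API; the function name `loss_threshold_mia` and the toy loss values are our own assumptions.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Predict membership: examples whose loss falls below the
    threshold are flagged as training-set members (a standard
    loss-threshold membership inference heuristic)."""
    return np.asarray(losses) < threshold

# Toy per-example losses: members of the training set tend to
# have lower loss than held-out non-members (these values are
# fabricated for illustration only).
member_losses = np.array([0.05, 0.10, 0.20])
nonmember_losses = np.array([0.90, 1.20, 0.70])

preds_members = loss_threshold_mia(member_losses, threshold=0.5)
preds_nonmembers = loss_threshold_mia(nonmember_losses, threshold=0.5)

# Attack quality is summarized by the true/false positive rates,
# which empirical DP analyses convert into privacy-leakage bounds.
tpr = preds_members.mean()     # fraction of members correctly flagged
fpr = preds_nonmembers.mean()  # fraction of non-members wrongly flagged
```

In practice, tools of this kind sweep the threshold to trace out a full TPR/FPR curve, from which empirical lower bounds on the DP parameter ε can be derived.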