VIP Content

Many video classification applications require access to users' personal data, posing an invasive privacy risk to users. We present a privacy-preserving implementation of single-frame-method video classification with convolutional neural networks that allows a party to infer a label from a video without requiring the video owner to disclose their video to other entities in unencrypted form. Likewise, our approach removes the requirement for the classifier owner to reveal their model parameters in plaintext to outside entities. To this end, we combine existing secure multi-party computation (MPC) protocols for private image classification with our novel MPC protocols for oblivious single-frame selection and secure label aggregation across frames. The result is an end-to-end privacy-preserving video classification pipeline. We evaluate the proposed solution in an application for private human emotion recognition. Results across a variety of security settings, including honest- and dishonest-majority configurations of the computing parties as well as passive and active adversaries, demonstrate that videos can be classified with state-of-the-art accuracy without leaking sensitive user information.

https://www.zhuanzhi.ai/paper/7955a3eed16d1e0663383e2abe84594f
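As a concrete illustration of the secure label-aggregation idea, the toy Python sketch below uses two-party additive secret sharing over a prime field: each party holds one share of every per-frame class score, sums its shares locally (share addition needs no interaction), and only the aggregated video-level result is reconstructed. The field size, the fixed-point scores, and the two-party setting are assumptions for this demo; it is not the paper's actual MPC protocol.

```python
# Minimal sketch (NOT the paper's MPC protocol): two-party additive secret
# sharing over a prime field, showing how per-frame class scores can be
# summed across frames without either party seeing any individual score.
import random

P = 2**61 - 1  # toy prime modulus (assumption)

def share(value: int) -> tuple[int, int]:
    """Split an integer into two additive shares modulo P."""
    r = random.randrange(P)
    return r, (value - r) % P

def reconstruct(s0: int, s1: int) -> int:
    return (s0 + s1) % P

# Fixed-point class scores produced per frame by the private CNN (toy numbers).
frame_scores = [
    [310, 20, 70],   # frame 1: scores for 3 emotion classes
    [280, 90, 30],   # frame 2
    [50, 40, 310],   # frame 3
]

# Secret-share every score between party 0 and party 1.
shares = [[share(s) for s in frame] for frame in frame_scores]
shares_p0 = [[s0 for s0, _ in frame] for frame in shares]
shares_p1 = [[s1 for _, s1 in frame] for frame in shares]

# Secure aggregation: each party sums its own shares per class locally.
agg_p0 = [sum(col) % P for col in zip(*shares_p0)]
agg_p1 = [sum(col) % P for col in zip(*shares_p1)]

# Only the aggregated scores are reconstructed; a real protocol would reveal
# just the argmax, i.e. the video-level label.
aggregated = [reconstruct(a, b) for a, b in zip(agg_p0, agg_p1)]
print("video-level label:", max(range(len(aggregated)), key=aggregated.__getitem__))
```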


Latest Content

Dance experts often view dance as a hierarchy of information, spanning the low level (raw images, image sequences), the mid level (human poses and body-part movements), and the high level (dance genre). We propose a Hierarchical Dance Video Recognition framework (HDVR). HDVR estimates 2D pose sequences, tracks dancers, and then simultaneously estimates the corresponding 3D poses and 3D-to-2D imaging parameters, without requiring ground truth for 3D poses. Unlike most methods that work on a single person, our tracking handles multiple dancers under occlusion. From the estimated 3D pose sequences, HDVR extracts body-part movements and, from these, the dance genre. The resulting hierarchical dance representation is explainable to experts. To overcome noise and inter-frame correspondence ambiguities, we enforce spatial and temporal motion smoothness and photometric continuity over time. We use an LSTM network to extract 3D movement subsequences from which we recognize the dance genre. For experiments, we have identified 154 movement types across 16 body parts, and assembled a new University of Illinois Dance (UID) Dataset containing 1143 video clips of 9 genres covering 30 hours, annotated with movement and genre labels. Our experimental results demonstrate that our algorithms outperform state-of-the-art 3D pose estimation methods, which in turn enhances our dance recognition performance.
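To make the genre-recognition stage concrete, the PyTorch sketch below assumes one movement-type id per body part per frame, embeds the movement sequence, and feeds it through an LSTM whose final hidden state is mapped to genre logits. The embedding size, hidden size, and input encoding are assumptions for illustration; only the counts (154 movement types, 16 body parts, 9 genres) come from the abstract, and this is not the released HDVR code.

```python
# Minimal sketch (assumed shapes and hyperparameters): an LSTM mapping a
# sequence of per-frame body-part movement ids to a dance-genre prediction.
import torch
import torch.nn as nn

NUM_MOVEMENT_TYPES = 154   # movement vocabulary size from the paper
NUM_BODY_PARTS = 16        # body parts from the paper
NUM_GENRES = 9             # genres in the UID dataset

class GenreLSTM(nn.Module):
    def __init__(self, embed_dim: int = 32, hidden_size: int = 256):
        super().__init__()
        # One movement-type id per body part per frame -> learned embeddings.
        self.embed = nn.Embedding(NUM_MOVEMENT_TYPES, embed_dim)
        self.lstm = nn.LSTM(
            input_size=embed_dim * NUM_BODY_PARTS,
            hidden_size=hidden_size,
            batch_first=True,
        )
        self.head = nn.Linear(hidden_size, NUM_GENRES)

    def forward(self, movements: torch.Tensor) -> torch.Tensor:
        # movements: (batch, time, NUM_BODY_PARTS) integer movement ids
        b, t, p = movements.shape
        x = self.embed(movements).reshape(b, t, -1)  # concat per-part embeddings
        _, (h_n, _) = self.lstm(x)                   # final hidden state
        return self.head(h_n[-1])                    # (batch, NUM_GENRES) logits

# Toy usage: a batch of 2 clips, 120 frames each.
model = GenreLSTM()
dummy = torch.randint(0, NUM_MOVEMENT_TYPES, (2, 120, NUM_BODY_PARTS))
print(model(dummy).shape)  # torch.Size([2, 9])
```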

