©PaperWeekly Original · Author | Zhang Ying
Affiliation | Tencent
The SMPL parametric human model:
Mesh representation: 6890 vertices, 13776 faces;
Pose control: 24 joints, each parameterized by a 3-D axis-angle rotation vector (24×3 = 72 dimensions);
Shape control: a 10-dimensional vector;
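To make the parameterization concrete, here is a minimal sketch that poses SMPL through the third-party `smplx` package (`pip install smplx`); the local model directory is an assumption, since the SMPL model files must be downloaded separately from the official site.

```python
import torch
import smplx

# Load SMPL; "models" is an assumed local directory holding the .pkl files.
model = smplx.create(model_path="models", model_type="smpl")

betas = torch.zeros(1, 10)         # 10-D shape vector
body_pose = torch.zeros(1, 69)     # 23 body joints x 3 axis-angle dims
global_orient = torch.zeros(1, 3)  # root rotation (the 24th joint)

output = model(betas=betas, body_pose=body_pose, global_orient=global_orient)
print(output.vertices.shape)  # (1, 6890, 3) -- the 6890 mesh vertices
print(model.faces.shape)      # (13776, 3)   -- the 13776 triangle faces
```

Changing `betas` morphs identity-dependent body shape, while the 72 pose dimensions articulate the skeleton; the mesh topology stays fixed.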
Learning 3D Human Dynamics from Video. In CVPR, 2019.
Monocular Total Capture: Posing Face, Body, and Hands in the Wild. In CVPR, 2019.
Human Mesh Recovery from Monocular Images via a Skeleton-disentangled Representation. In ICCV, 2019.
VIBE: Video Inference for Human Body Pose and Shape Estimation. In CVPR, 2020.
PoseNet3D: Learning Temporally Consistent 3D Human Pose via Knowledge Distillation. In CVPR, 2020.
Appearance Consensus Driven Self-Supervised Human Mesh Recovery. In ECCV, 2020.
4.1 Single RGB Image
360-Degree Textures of People in Clothing from a Single Image. In 3DV, 2019.
Tex2Shape: Detailed Full Human Body Geometry From a Single Image. In ICCV, 2019.
ARCH: Animatable Reconstruction of Clothed Humans. In CVPR, 2020.
3D Human Avatar Digitization from a Single Image. In VRCAI, 2019.
Approach 2: estimate the 3D pose and warp the input into a canonical space, then estimate occupancy with PIFu (a minimal warping sketch follows this list);
Advantages: the reconstruction can be directly animated; the generated texture quality is relatively high;
Problems: heavy reliance on scanned 3D human ground truth for training; requires very accurate pose estimation as a prior; struggles with complex deformations such as long hair and skirts;
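The core of this canonicalization step is undoing the skinning deformation. Below is a minimal numpy sketch of inverse linear blend skinning, assuming per-joint transforms and skinning weights are already available (e.g. from a fitted SMPL model); `inverse_lbs` and its toy inputs are illustrative, not ARCH's actual formulation.

```python
import numpy as np

def inverse_lbs(points, joint_transforms, skin_weights):
    """points: (N,3) posed points; joint_transforms: (J,4,4)
    canonical->posed transforms; skin_weights: (N,J), rows sum to 1.
    Returns (N,3) canonical-space points."""
    # Blend the per-joint transforms for each point...
    blended = np.einsum("nj,jab->nab", skin_weights, joint_transforms)  # (N,4,4)
    points_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    # ...then invert the blended transform to undo the posing.
    canonical_h = np.einsum("nab,nb->na", np.linalg.inv(blended), points_h)
    return canonical_h[:, :3]

# Toy usage: two joints, identity vs. a 90-degree rotation about z.
Rz = np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])
transforms = np.stack([np.eye(4), Rz])
pts = np.array([[0.0, 1.0, 0.0]])          # a point on the posed body
w = np.array([[0.0, 1.0]])                 # fully bound to the rotated joint
print(inverse_lbs(pts, transforms, w))     # -> approx [[1, 0, 0]]
```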
PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization. In ICCV, 2019.
PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization. In CVPR, 2020.
SiCloPe: Silhouette-Based Clothed People. In CVPR, 2019.
PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction. In TPAMI, 2020.
Reconstructing NBA Players. In ECCV, 2020.
Clothed-human representation: occupancy + RGB;
Approach 1: train a network that extracts the image feature at a 3D point's projected pixel location and, combined with the point's position, predicts that point's occupancy and RGB values (see the sketch after this list);
Advantages: handles arbitrary poses and can model complex appearance such as long hair and skirts;
Problems: heavy reliance on scanned 3D human ground truth for training; SMPL registration is needed afterwards before the result can be animated; texture quality is not very high;
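A minimal PyTorch sketch of this pixel-aligned idea follows; the convolutional encoder and MLP widths are placeholders for the paper's hourglass network and surface classifier, and the orthographic `project` lambda in the toy usage is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAlignedImplicit(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # Toy image encoder standing in for the paper's hourglass network.
        self.encoder = nn.Conv2d(3, feat_dim, kernel_size=7, padding=3)
        # MLP maps (pixel-aligned feature, point position) to occupancy + RGB.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 4),
        )

    def forward(self, image, points, project):
        """image: (B,3,H,W); points: (B,N,3) 3D query points;
        project: camera projection mapping points to xy in [-1,1]."""
        feat_map = self.encoder(image)                        # (B,C,H,W)
        xy = project(points)                                  # (B,N,2)
        # Bilinearly sample the feature at each point's projection.
        feats = F.grid_sample(
            feat_map, xy.unsqueeze(2), align_corners=True
        ).squeeze(-1).permute(0, 2, 1)                        # (B,N,C)
        out = self.mlp(torch.cat([feats, points], dim=-1))    # (B,N,4)
        occupancy = torch.sigmoid(out[..., :1])
        rgb = torch.sigmoid(out[..., 1:])
        return occupancy, rgb

# Toy usage with an orthographic projection onto the xy-plane.
model = PixelAlignedImplicit()
img = torch.randn(1, 3, 128, 128)
pts = torch.rand(1, 1000, 3) * 2 - 1
occ, rgb = model(img, pts, project=lambda p: p[..., :2])
print(occ.shape, rgb.shape)  # (1, 1000, 1) and (1, 1000, 3)
```

At inference time, occupancy is evaluated on a dense grid and the surface is extracted with marching cubes.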
BodyNet: Volumetric Inference of 3D Human Body Shapes. In ECCV, 2018.
DeepHuman: 3D Human Reconstruction From a Single Image. In ICCV, 2019.
Problems: texture must be estimated separately; resolution is low; heavy reliance on scanned 3D human ground truth for training; SMPL registration is needed afterwards before the result can be animated;
Deep Volumetric Video From Very Sparse Multi-View Performance Capture. In ECCV, 2018.
PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization. In ICCV, 2019.
PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization. In CVPR, 2020.
Problems: multi-view data is difficult to capture; heavy reliance on scanned 3D human ground truth for training; SMPL registration is needed afterwards before the result can be animated; texture quality is not very high;
Video Based Reconstruction of 3D People Models. In CVPR, 2018.
Detailed Human Avatars from Monocular Video. In 3DV, 2018.
Learning to Reconstruct People in Clothing from a Single RGB Camera. In CVPR, 2019.
Multi-Garment Net: Learning to Dress 3D People from Images. In ICCV, 2019.
Problems: heavy reliance on scanned 3D human ground truth for training; requires fairly accurate pose estimation and human parsing as priors; struggles with complex deformations such as long hair and skirts;
Problems: relies on fairly accurate pose and segmentation estimates; handles only certain garment types;
Robust 3D Self-portraits in Seconds. In CVPR, 2020.
TexMesh: Reconstructing Detailed Human Texture and Geometry from RGB-D Video. In ECCV, 2020.
Problems: the pipeline is somewhat complex; texture quality is mediocre;
Problems: no texture;
Physics-Inspired Garment Recovery from a Single-View Image. In TOG, 2018.
Approach: garment segmentation + garment attribute estimation (size, fabric, wrinkles) + body mesh estimation, followed by joint material-pose optimization and cloth simulation (a toy simulation step is sketched after this list);
Advantages: well-structured parametric representations of garment and body; incorporates physical, statistical, and geometric priors;
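To illustrate the cloth-simulation component, here is a minimal explicit-Euler mass-spring step in numpy; the stiffness `k` stands in for the fabric material parameter that such a pipeline optimizes jointly with pose, and all constants are illustrative rather than taken from the paper.

```python
import numpy as np

def step(pos, vel, springs, rest_len, k=80.0, damping=0.98,
         dt=1e-2, gravity=np.array([0.0, -9.8, 0.0]), pinned=()):
    """One explicit-Euler step. pos, vel: (N,3); springs: (S,2) index
    pairs; rest_len: (S,). Returns updated (pos, vel)."""
    force = np.tile(gravity, (len(pos), 1))      # unit mass per particle
    i, j = springs[:, 0], springs[:, 1]
    d = pos[j] - pos[i]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    # Hooke's law along each spring; k plays the role of the material
    # parameter being optimized against image evidence in the paper.
    f = k * (length - rest_len[:, None]) * d / np.maximum(length, 1e-8)
    np.add.at(force, i, f)
    np.add.at(force, j, -f)
    vel = damping * (vel + dt * force)
    vel[list(pinned)] = 0.0                      # pinned vertices stay fixed
    return pos + dt * vel, vel

# Toy usage: a 3-particle chain hanging from a pinned endpoint.
pos = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0]])
vel = np.zeros_like(pos)
springs = np.array([[0, 1], [1, 2]])
rest = np.full(2, 0.1)
for _ in range(200):
    pos, vel = step(pos, vel, springs, rest, pinned=(0,))
print(pos)  # the free end swings down under gravity
```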
DeepWrinkles: Accurate and Realistic Clothing Modeling. In ECCV, 2018.
Multi-Garment Net: Learning to Dress 3D People from Images. In ICCV, 2019.
Learning-Based Animation of Clothing for Virtual Try-On. In EUROGRAPHICS, 2019.
TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style. In CVPR, 2020.
BCNet: Learning Body and Cloth Shape from A Single Image. In ECCV, 2020.
Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images. In ECCV, 2020.
Contributions: introduces the Deep Fashion3D dataset: 2,000 garments across 10 categories, annotated with point clouds, multi-view images, 3D body pose, and feature lines;
Approach: single-image 3D garment reconstruction that deforms an adaptable template guided by the estimated garment category, body pose, and feature lines (a toy fitting loop is sketched below);
Advantages: garment-category and feature-line estimation provide stronger deformation priors; an implicit-surface stage yields finer reconstruction;
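As a rough illustration of template-based fitting, the sketch below deforms template vertices toward target geometry with a one-sided Chamfer term plus an edge-smoothness regularizer; this is a generic stand-in, not Deep Fashion3D's actual objective or feature-line handling.

```python
import torch

def fit_template(template, target, edges, iters=200, smooth_w=0.1):
    """template: (V,3) template vertices; target: (M,3) target points
    (e.g. a sampled point cloud); edges: (E,2) template edge indices.
    Returns the deformed (V,3) vertices."""
    offsets = torch.zeros_like(template, requires_grad=True)
    opt = torch.optim.Adam([offsets], lr=1e-2)
    for _ in range(iters):
        verts = template + offsets
        # One-sided Chamfer: pull every target point to its nearest vertex.
        d = torch.cdist(target, verts)                     # (M, V)
        data_loss = d.min(dim=1).values.mean()
        # Penalize differential offsets along edges to keep deformation smooth.
        reg = (offsets[edges[:, 0]] - offsets[edges[:, 1]]).pow(2).sum(-1).mean()
        loss = data_loss + smooth_w * reg
        opt.zero_grad(); loss.backward(); opt.step()
    return (template + offsets).detach()

# Toy usage: fit a unit-square template strip to a shifted copy of itself.
template = torch.tensor([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
edges = torch.tensor([[0, 1], [0, 2], [1, 3], [2, 3]])
target = template + torch.tensor([0.2, 0.0, 0.0])
print(fit_template(template, target, edges))
```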