Paper | Code | Demo | Colab
We present an approach that, given any video in the wild, can jointly reconstruct the underlying humans in 3D and track them over time. Here, we show the input video on the left and the reconstructed humans on the right, without any temporal smoothing, with colors indicating track identities over time. Our approach works reliably on usual and unusual poses, under poor visibility, extreme truncations and extreme occlusions.
Left: HMR 2.0 is a fully "transformerized" version of a network for Human Mesh Recovery, containing a ViT and a cross-attention-based transformer decoder. Right: We use HMR 2.0 as the backbone of our 4DHumans system, that builds on PHALP, to jointly reconstruct and track humans in 4D. |
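To make the design above concrete, here is a minimal, illustrative PyTorch sketch of that architecture: a ViT-style image encoder followed by a cross-attention transformer decoder whose output token is read out into SMPL pose, shape and camera parameters. This is not the released 4DHumans code; all layer sizes, module names and output heads are assumptions chosen for brevity.

```python
# Minimal sketch (not the released code) of the HMR 2.0 design described above:
# ViT encoder over image patches + cross-attention transformer decoder that
# regresses SMPL parameters. All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn


class HMR2Sketch(nn.Module):
    def __init__(self, img_size=256, patch=16, dim=768, depth=6, heads=8):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # ViT encoder: patchify the image crop and run self-attention over tokens.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=depth)
        # Cross-attention decoder: a single learned query token attends to the
        # image tokens and is decoded into the human mesh parameters.
        self.query = nn.Parameter(torch.zeros(1, 1, dim))
        dec_layer = nn.TransformerDecoderLayer(dim, heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=depth)
        self.head_pose = nn.Linear(dim, 24 * 6)   # per-joint 6D rotations (assumed)
        self.head_betas = nn.Linear(dim, 10)      # SMPL shape coefficients
        self.head_cam = nn.Linear(dim, 3)         # weak-perspective camera

    def forward(self, img):                       # img: (B, 3, H, W)
        tokens = self.patch_embed(img).flatten(2).transpose(1, 2)
        tokens = self.encoder(tokens + self.pos_embed)
        q = self.query.expand(img.shape[0], -1, -1)
        out = self.decoder(q, tokens).squeeze(1)  # (B, dim)
        return self.head_pose(out), self.head_betas(out), self.head_cam(out)


# Example usage on a batch of two person crops.
pose, betas, cam = HMR2Sketch()(torch.randn(2, 3, 256, 256))
```

In the full 4DHumans system, per-frame predictions like these would be associated across time by the PHALP-style tracker mentioned above; that tracking stage is not shown in this sketch.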
We present comparisons to previous state-of-the-art approaches (PyMAF-X and PARE) for lifting humans to 3D. HMR 2.0 reconstructions are temporally stable, and are better aligned to the input video. The baselines have a lot of jitter and sometimes even fail completely on hard poses or unusual viewpoints. |
Here, we visualize the reconstructed mesh from a novel view, in addition to the camera view. HMR 2.0 reconstructions are plausible and temporally stable, even when observed from a novel view, despite being per-frame 3D reconstructions without temporal smoothing.
Citation |
Acknowledgements
This research was supported by the DARPA Machine Common Sense program, ONR MURI, as well as BAIR/BDD sponsors. We thank members of the BAIR community for helpful discussions. We also thank StabilityAI for their generous compute grant. This webpage template was borrowed from some colorful folks. Music credits: SLAHMR. Icons: Flaticon.