REACTO is a novel approach that models the 3D shape, texture, and motion of general articulated objects from a single casual video, outperforming prior state-of-the-art methods.
LEIA learns view-invariant latent embeddings from multi-view images of an object in different articulation states, enabling the generation of novel intermediate articulation states through latent interpolation.