A novel deep learning method that generates variations of contact-rich two-person interactions across different body sizes and proportions while preserving the key geometric and topological relations between the two bodies.
This work presents the Large Motion Model (LMM), the first generalist multi-modal motion generation model that handles multiple motion generation tasks within a single framework and achieves competitive performance across a wide range of benchmarks.