The author proposes MoST, a motion style transformer that effectively disentangles style from content and generates high-quality motions with transferred style. The approach introduces a new architecture and loss functions that outperform existing methods.
The proposed SMCD framework learns motion style features more comprehensively by treating the style motion as a condition, and the introduced Motion Style Mamba (MSM) module effectively captures the temporal information of motion sequences, enabling more realistic and natural motion style transfer.