This paper presents Key2Mesh, a model that efficiently estimates 3D human body meshes from 2D keypoint inputs. By leveraging large-scale unpaired motion capture (MoCap) data and an adversarial domain adaptation technique, Key2Mesh bridges the gap between the MoCap and visual domains.
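To make the adversarial domain adaptation idea concrete, here is a minimal PyTorch sketch of the general technique: a feature encoder over 2D keypoints is trained so that a domain discriminator cannot tell MoCap-derived keypoints from detector keypoints. This is an illustration of the standard adversarial adaptation recipe, not the Key2Mesh architecture; all names (`KeypointEncoder`, `DomainDiscriminator`) and dimensions are hypothetical.

```python
# Illustrative sketch of adversarial domain adaptation between a MoCap
# (source) and a visual (target) keypoint domain. All modules, names,
# and sizes are assumptions, not taken from the Key2Mesh paper.
import torch
import torch.nn as nn

class KeypointEncoder(nn.Module):
    """Maps 2D keypoints (J joints -> 2J values) to a feature vector."""
    def __init__(self, num_joints: int = 17, feat_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints * 2, 512), nn.ReLU(),
            nn.Linear(512, feat_dim),
        )

    def forward(self, kpts):  # kpts: (B, J, 2)
        return self.net(kpts.flatten(1))

class DomainDiscriminator(nn.Module):
    """Predicts whether a feature came from the MoCap or the visual domain."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, feat):
        return self.net(feat)  # raw logits

encoder, disc = KeypointEncoder(), DomainDiscriminator()
bce = nn.BCEWithLogitsLoss()

mocap_kpts = torch.randn(8, 17, 2)   # source: keypoints projected from MoCap
visual_kpts = torch.randn(8, 17, 2)  # target: detector keypoints from images

# Discriminator step: learn to tell the two domains apart.
f_src = encoder(mocap_kpts).detach()
f_tgt = encoder(visual_kpts).detach()
d_loss = bce(disc(f_src), torch.ones(8, 1)) + bce(disc(f_tgt), torch.zeros(8, 1))

# Encoder (adversarial) step: make visual-domain features indistinguishable
# from MoCap features, so a mesh regressor trained on MoCap data transfers.
g_loss = bce(disc(encoder(visual_kpts)), torch.ones(8, 1))
print(d_loss.item(), g_loss.item())
```

In practice the two losses are alternated (or combined via a gradient reversal layer), while the mesh regression loss on the labeled MoCap data keeps the shared features useful for the downstream task.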
LPSNet is the first end-to-end framework that directly recovers 3D human pose and shape from lensless imaging measurements, without intermediate image reconstruction.
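The following sketch shows what "end-to-end without intermediate reconstruction" means architecturally: a network consumes the raw sensor measurement and regresses body parameters directly. The layer choices, the name `MeasurementToBody`, and the 72/10-dimensional output split (the common SMPL pose/shape parameterization) are assumptions for illustration, not LPSNet's actual design.

```python
# Illustrative sketch of an end-to-end lensless-to-body pipeline: raw
# sensor measurements map straight to SMPL-style pose/shape parameters,
# skipping any intermediate image reconstruction. Placeholder architecture.
import torch
import torch.nn as nn

class MeasurementToBody(nn.Module):
    def __init__(self, pose_dim: int = 72, shape_dim: int = 10):
        super().__init__()
        # Convolutional encoder over the raw lensless measurement.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Regress pose (per-joint axis-angle) and shape coefficients directly.
        self.head = nn.Linear(64, pose_dim + shape_dim)
        self.pose_dim = pose_dim

    def forward(self, measurement):  # (B, 1, H, W) raw sensor readings
        params = self.head(self.encoder(measurement))
        return params[:, :self.pose_dim], params[:, self.pose_dim:]

model = MeasurementToBody()
pose, shape = model(torch.randn(2, 1, 256, 256))  # dummy measurement batch
print(pose.shape, shape.shape)  # torch.Size([2, 72]) torch.Size([2, 10])
```

Avoiding the reconstruction stage matters because lensless measurements are globally multiplexed; recovering an image first adds latency and can discard information that a direct regressor could still exploit.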