Core Concepts
This work introduces a novel memory-scalable and efficient functional map learning pipeline that avoids storing large dense matrices in memory. It also presents a differentiable version of the ZoomOut algorithm for map refinement, which can be used during training to provide self-supervision.
Abstract
The content discusses a novel approach to functional map learning for non-rigid shape matching. The key contributions are:
A memory-scalable and efficient implementation of dense pointwise maps, which avoids storing large dense matrices in memory. This is achieved by exploiting the specific structure of functional maps and using GPU acceleration.
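The idea above can be sketched as follows. In the functional map framework, the dense pointwise map is (approximately) a product of the two shapes' truncated eigenbases with the small spectral map, so it never needs to be stored explicitly: each vertex's correspondence can be recovered by a nearest-neighbor search in the spectral embedding, processed in chunks. This is a minimal illustration, not the authors' implementation; the function name, argument conventions (a k2 x k1 map `C` from shape 1 to shape 2, row-major eigenvector matrices), and the chunk size are assumptions.

```python
import numpy as np

def fmap_to_p2p(C, evecs1, evecs2, chunk=2048):
    """Sketch: convert a small spectral functional map C (k2 x k1) into a
    pointwise map from shape 2 to shape 1 WITHOUT materializing the dense
    (n2 x n1) matrix Pi ~ evecs2 @ C @ pinv(evecs1).

    For each vertex i of shape 2, find the vertex j of shape 1 whose
    spectral embedding evecs1[j] is closest to (evecs2 @ C)[i].
    Processing rows in chunks keeps peak memory at O(chunk * n1)
    instead of O(n2 * n1)."""
    emb2 = evecs2 @ C              # (n2, k1): shape-2 vertices in shape-1's basis
    sq1 = (evecs1 ** 2).sum(1)     # precomputed squared norms of shape-1 rows
    p2p = np.empty(emb2.shape[0], dtype=np.int64)
    for start in range(0, emb2.shape[0], chunk):
        block = emb2[start:start + chunk]                 # (b, k1)
        # squared distance ||x||^2 - 2 x.y + ||y||^2; the ||x||^2 term is
        # constant per row, so it can be dropped for the argmin
        d = sq1[None, :] - 2.0 * block @ evecs1.T         # (b, n1)
        p2p[start:start + chunk] = d.argmin(axis=1)
    return p2p
```

The same chunked pattern maps directly onto GPU batches, which is what makes the dense-map representation scalable in practice.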
A differentiable version of the ZoomOut algorithm for map refinement, which can be used during training to provide self-supervision by enforcing consistency between the initial and refined functional maps.
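One ZoomOut iteration alternates between converting the current k x k functional map to a pointwise map and re-projecting that pointwise map onto enlarged spectral bases of size k+1. A differentiable variant replaces the hard nearest-neighbor step with soft, row-stochastic correspondences. The sketch below illustrates this idea on small meshes only; the softmax temperature is an assumed hyperparameter, and unlike the paper's memory-scalable formulation it materializes the dense soft map for clarity.

```python
import numpy as np

def soft_p2p(C, evecs1, evecs2, temperature=1e-2):
    """Soft (differentiable) pointwise map: a softmax over negative squared
    spectral distances instead of a hard argmin. Returns an (n2, n1)
    row-stochastic weight matrix (dense here purely for illustration)."""
    emb2 = evecs2[:, :C.shape[0]] @ C                               # (n2, k1)
    d = ((emb2[:, None, :] - evecs1[None, :, :C.shape[1]]) ** 2).sum(-1)
    # subtract the row-wise minimum before exponentiating for stability
    w = np.exp(-(d - d.min(axis=1, keepdims=True)) / temperature)
    return w / w.sum(axis=1, keepdims=True)

def zoomout_step(C, evecs1, evecs2, step=1, temperature=1e-2):
    """One soft ZoomOut iteration: build soft correspondences from the
    current k x k map, then re-estimate a larger (k+step) x (k+step) map
    by projecting onto the enlarged eigenbases."""
    k = C.shape[0] + step
    P = soft_p2p(C, evecs1, evecs2, temperature)
    # C_new = pinv(evecs2[:, :k]) @ (Pi @ evecs1[:, :k])
    return np.linalg.pinv(evecs2[:, :k]) @ (P @ evecs1[:, :k])
```

Because every operation here is differentiable, gradients can flow from a loss on the refined map back to the network that predicted the initial map, which is what enables the self-supervision described above.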
A single-branch network architecture for functional map learning that does not require differentiating through a linear system solver, unlike previous methods. This is enabled by the proposed self-supervision approach.
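The self-supervision signal described above, consistency between the initially predicted map and its refined version, can be expressed as a simple loss on the shared leading block of the two maps. This is a hypothetical sketch of such a term, not the paper's exact loss; the function name and the plain squared-Frobenius form are assumptions.

```python
import numpy as np

def consistency_loss(C_init, C_refined):
    """Hypothetical self-supervision term: penalize disagreement between
    the initial k x k functional map and the larger refined map, compared
    on their shared leading k x k block."""
    k = C_init.shape[0]
    return float(((C_refined[:k, :k] - C_init) ** 2).sum())
```

Training against such a consistency term supervises the predicted map directly, which is why the network no longer needs to backpropagate through a linear system solver.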
The authors first provide background on the functional map framework and recent developments in deep functional map learning. They then introduce their scalable dense maps and differentiable ZoomOut algorithm. Finally, they present their overall pipeline and demonstrate its efficiency, scalability, and performance on various shape matching benchmarks.
Stats
The content provides no specific numerical data or metrics to support the key claims, though it does present qualitative results and comparisons with prior work.