The key insights of this work are:
Distortion rectification can be cast as the problem of learning an ordinal distortion from a single distorted image. The ordinal distortion is the sequence of distortion levels at a series of pixels sampled outward from the principal point.
Compared with traditional distortion parameters, the ordinal distortion is more explicitly tied to image features and more homogeneous in its representation. This enables neural networks to gain sufficient distortion perception and converge faster without extra feature guidance or pixel-wise supervision.
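As a hedged illustration of the concept (the polynomial radial model, coefficient values, and sampling radii below are assumptions, not taken from the paper): under a model where the distortion level at normalized radius r is delta(r) = 1 + k1*r^2 + k2*r^4, the ordinal distortion is simply that level evaluated at increasing radii from the principal point.

```python
import numpy as np

def ordinal_distortion(k1, k2, radii):
    """Distortion levels at increasing radii under an assumed
    polynomial radial model delta(r) = 1 + k1*r^2 + k2*r^4."""
    r = np.asarray(radii, dtype=float)
    return 1.0 + k1 * r**2 + k2 * r**4

# Hypothetical barrel distortion (k1 < 0); radii are normalized.
levels = ordinal_distortion(k1=-0.3, k2=0.05, radii=[0.25, 0.5, 0.75, 1.0])
# For barrel distortion the levels decrease monotonically with radius,
# which is the ordering property the name "ordinal" refers to.
```

The monotonic ordering of the levels is what makes this target homogeneous and easy for a network to regress, in contrast to raw coefficients whose magnitudes differ by orders of magnitude.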
The authors design a local-global associated estimation network that learns the ordinal distortion to approximate the realistic distortion distribution. A distortion-aware perception layer is exploited to boost the feature extraction of different degrees of distortion.
The estimated ordinal distortion can be easily converted to the distortion parameters for various camera models, enabling efficient and accurate distortion rectification.
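A minimal sketch of that conversion, assuming the same polynomial radial model as above (the model, radii, and helper name are illustrative assumptions): since delta_i = 1 + k1*r_i^2 + k2*r_i^4 is linear in (k1, k2), the coefficients can be recovered from the estimated levels by linear least squares.

```python
import numpy as np

def fit_distortion_params(radii, levels):
    """Recover (k1, k2) of an assumed polynomial radial model from
    ordinal distortion levels via linear least squares."""
    r = np.asarray(radii, dtype=float)
    A = np.stack([r**2, r**4], axis=1)          # design matrix
    b = np.asarray(levels, dtype=float) - 1.0   # delta - 1 is linear in k
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs  # (k1, k2)

# Round-trip check with synthetic levels from known coefficients.
radii = np.array([0.25, 0.5, 0.75, 1.0])
true_levels = 1.0 - 0.3 * radii**2 + 0.05 * radii**4
k1, k2 = fit_distortion_params(radii, true_levels)
# k1 ≈ -0.3, k2 ≈ 0.05
```

Because the fit is linear, it is cheap and numerically stable, which is one reason an ordinal target can be converted to parameters for different camera models without retraining.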
Extensive experiments demonstrate that the proposed approach outperforms state-of-the-art methods by a significant margin, improving quantitative evaluation by approximately 23% while requiring fewer input images.
Key insights distilled from work by Kang Liao, Ch... at arxiv.org (04-30-2024): https://arxiv.org/pdf/2007.10689.pdf