RetinaRegNet is a versatile model that achieves state-of-the-art performance on a range of retinal image registration tasks by combining diffusion feature extraction, an inverse consistency constraint, and a transformation-based outlier detector.
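To make the inverse consistency idea concrete, the sketch below shows a minimal cycle-consistency filter over putative keypoint correspondences: a match is kept only if mapping a point from image A to B and back to A returns close to where it started. The function name, array layout, and pixel threshold are illustrative assumptions, not details taken from RetinaRegNet.

```python
import numpy as np

def cycle_consistency_filter(pts_a, matched_b, round_trip_a, max_err_px=2.0):
    """Keep correspondences whose A -> B -> A round trip lands near its start.

    pts_a        : (N, 2) keypoint locations in image A
    matched_b    : (N, 2) their matched locations in image B (forward matching)
    round_trip_a : (N, 2) locations obtained by matching matched_b back to A
    max_err_px   : round-trip error tolerance in pixels (illustrative value)
    """
    err = np.linalg.norm(round_trip_a - pts_a, axis=1)   # per-point cycle error
    keep = err < max_err_px                              # inconsistent matches are outliers
    return pts_a[keep], matched_b[keep], keep
```

A transformation-based outlier detector can then operate on the surviving matches, for example by fitting a global transform and rejecting correspondences with large residuals; that step is omitted here.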
The authors improve the ConKeD framework for retinal image registration by evaluating multiple contrastive learning loss functions, including the SupCon, MP-InfoNCE, MP-N-Pair, and FastAP losses. They demonstrate state-of-the-art performance across multiple datasets, including the standard FIRE benchmark as well as two new datasets (LongDRS and DeepDRiD) with diverse characteristics.
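Of these, SupCon (supervised contrastive loss) treats every sample sharing an anchor's label as a positive and contrasts it against all other samples in the batch. Below is a minimal PyTorch sketch of the standard SupCon formulation over a batch of embeddings; the labels, temperature, and batching shown are illustrative and not tied to the ConKeD training setup.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.07):
    """Supervised contrastive (SupCon) loss over a batch of embeddings.

    features : (N, D) embedding vectors (L2-normalized inside this function)
    labels   : (N,) integer labels; samples sharing a label are positives
    """
    features = F.normalize(features, dim=1)
    n = features.size(0)
    sim = features @ features.T / temperature                     # pairwise similarities
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float('-inf'))               # drop self from denominator
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)    # log-softmax over the batch

    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    mean_log_prob_pos = torch.where(pos_mask, log_prob,
                                    torch.zeros_like(log_prob)).sum(dim=1) / pos_counts

    has_pos = pos_mask.any(dim=1)                                 # anchors with at least one positive
    return -mean_log_prob_pos[has_pos].mean()

# Usage sketch (hypothetical descriptor model and keypoint identities):
# z = descriptor_model(patches)          # (N, D) descriptors
# loss = supcon_loss(z, keypoint_ids)    # same keypoint id across views => positive pair
```

The multi-positive losses mentioned alongside it (MP-InfoNCE, MP-N-Pair) follow the same batch-construction pattern but differ in how the positives enter the objective.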