Proposing PAME, a self-supervised pre-training framework based on masked autoencoders, to enhance no-reference point cloud quality assessment.
CrIBo introduces a novel method for self-supervised learning tailored to enhance dense visual representation learning.
LeOCLR introduces a new approach to contrastive instance discrimination, improving representation learning by leveraging original images.
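Contrastive instance discrimination, the setting LeOCLR builds on, is commonly trained with an InfoNCE-style objective: each sample's embedding under one augmentation should match its counterpart under another augmentation, against all other samples in the batch. The sketch below is a minimal generic illustration of that objective, not LeOCLR's specific loss; the function name and parameters are assumptions for the example.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss (generic, not
    LeOCLR's exact objective): row i of z1 should match row i of z2
    against all other rows in the batch."""
    # L2-normalize embeddings so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal: sample i pairs with sample i
    return -np.mean(np.diag(log_probs))
```

With two identical views the diagonal dominates and the loss is near its minimum; mismatching the pairs (e.g. shuffling one view) raises it.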
A diffusion-driven self-supervised network for multi-object shape reconstruction and categorical pose estimation.
FroSSL achieves faster convergence and competitive accuracies in self-supervised learning by minimizing covariance Frobenius norms.
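The covariance Frobenius norm mentioned above can be made concrete with a short sketch. This is an illustrative loss, not the official FroSSL objective: it combines an invariance term (MSE between two views) with a Frobenius-norm term on each view's embedding covariance; when embeddings are normalized so the covariance trace is fixed, shrinking the Frobenius norm spreads variance across dimensions and discourages dimensional collapse. The function name and `lam` weight are assumptions for the example.

```python
import numpy as np

def frobenius_covariance_loss(z1, z2, lam=1.0):
    """Illustrative (not the official FroSSL) loss: an invariance term
    plus Frobenius norms of the per-view covariance matrices."""
    # Center each view's embeddings per dimension
    z1c = z1 - z1.mean(axis=0)
    z2c = z2 - z2.mean(axis=0)
    n = z1.shape[0]
    # Covariance matrices of the embedding dimensions
    cov1 = z1c.T @ z1c / (n - 1)
    cov2 = z2c.T @ z2c / (n - 1)
    # Frobenius-norm term: with a fixed trace, minimizing it
    # equalizes eigenvalues, i.e. spreads variance across dimensions
    fro = np.linalg.norm(cov1, 'fro') + np.linalg.norm(cov2, 'fro')
    # Invariance term pulls the two augmented views together
    inv = np.mean((z1 - z2) ** 2)
    return inv + lam * fro
```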
Balancing stability and plasticity is crucial for effective continual self-supervised learning.
Proposing a self-supervised method to jointly learn 3D motion and depth from monocular videos, benefiting both depth and 3D motion estimation.
BiSSL, a novel training framework leveraging bilevel optimization, improves the alignment between self-supervised pre-training and downstream fine-tuning, yielding better downstream performance.
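Bilevel optimization, the machinery BiSSL rests on, can be shown with a toy scalar example (illustrative only; BiSSL's actual objectives differ). An outer variable initializes an inner "fine-tuning" problem solved by one gradient step, and the outer update differentiates through that step (an unrolled bilevel gradient). All names and the quadratic losses are assumptions for the example.

```python
# Toy bilevel optimization: outer variable w initializes the inner
# problem; the outer gradient is taken through the unrolled inner step.

def inner_step(w, a, lr=0.1):
    # Inner loss (v - a)^2, fine-tuned one gradient step from v = w
    return w - lr * 2.0 * (w - a)

def outer_grad(w, a, b, lr=0.1):
    # Outer loss (v - b)^2 evaluated at the fine-tuned v,
    # differentiated through inner_step via the chain rule
    v = inner_step(w, a, lr)
    dv_dw = 1.0 - 2.0 * lr          # derivative of the inner step
    return 2.0 * (v - b) * dv_dw

w = 0.0
for _ in range(200):
    w -= 0.1 * outer_grad(w, a=1.0, b=2.0)
# w converges so that the fine-tuned v = inner_step(w, 1.0) hits b = 2.0
```

The outer loop does not drive w itself to b; it drives w to the initialization from which one inner step lands on b, which is the essence of aligning pre-training with fine-tuning.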
Introducing FALCON, a novel approach to non-contrastive self-supervised learning that provably avoids common failure modes (representation, dimensional, cluster, and intracluster collapse), improving generalization on downstream tasks.