A novel Res-U2Net deep learning architecture is proposed for efficient phase retrieval from intensity measurements, enabling high-quality 2D and 3D image reconstruction.
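To make the setup concrete, here is a minimal PyTorch sketch of this style of phase retrieval, assuming an untrained, residual U-Net-like network whose output phase is passed through a hypothetical intensity forward model and fitted to the measured pattern. The network layout, the FFT-based forward model, and all hyperparameters are illustrative assumptions, not the paper's exact Res-U2Net.

```python
# Illustrative sketch only: a toy residual encoder-decoder fitted so that a
# hypothetical forward model of its output matches a measured intensity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))   # residual connection

class TinyResUNet(nn.Module):
    """Toy encoder-decoder with residual blocks and one skip connection."""
    def __init__(self, ch=16):
        super().__init__()
        self.inp = nn.Conv2d(1, ch, 3, padding=1)
        self.enc = ResBlock(ch)
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)
        self.mid = ResBlock(ch)
        self.up = nn.ConvTranspose2d(ch, ch, 2, stride=2)
        self.dec = ResBlock(ch)
        self.out = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, z):
        e = self.enc(self.inp(z))
        m = self.mid(self.down(e))
        d = self.dec(self.up(m) + e)          # skip connection
        return torch.sigmoid(self.out(d))     # estimated phase in [0, 1]

def forward_intensity(phase):
    """Hypothetical forward model: intensity of the FFT of exp(i * phase)."""
    field = torch.exp(1j * 2 * torch.pi * phase)
    return torch.fft.fft2(field).abs() ** 2

# Optimize the untrained network so its output explains the measurement.
measured = forward_intensity(torch.rand(1, 1, 64, 64))   # stand-in measurement
net, z = TinyResUNet(), torch.randn(1, 1, 64, 64)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = F.mse_loss(forward_intensity(net(z)), measured)
    loss.backward()
    opt.step()
```

The same fitting loop extends to 3D reconstruction by swapping in a volumetric network and forward model; the 2D version above is kept only to show the structure of the problem.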
This study proposes a novel unsupervised multi-exposure image fusion architecture that effectively leverages latent information in source images, optimizes the fusion process using attention mechanisms, and enhances the color and saturation of the final fused image.
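As an illustration of attention-driven fusion, the sketch below scores each exposure per pixel with a small shared module and blends the inputs with softmax-normalized weights. The module sizes, the per-pixel weighting scheme, and the usage example are assumptions for exposition and do not reproduce the paper's architecture or its unsupervised losses.

```python
# Illustrative sketch only: softmax-weighted fusion of a multi-exposure stack.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        # Shared feature extractor applied to every exposure.
        self.feat = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        # One attention score map per exposure.
        self.score = nn.Conv2d(ch, 1, 1)

    def forward(self, exposures):
        # exposures: (B, N, 3, H, W) stack of N differently exposed images
        b, n, c, h, w = exposures.shape
        flat = exposures.reshape(b * n, c, h, w)
        scores = self.score(self.feat(flat)).reshape(b, n, 1, h, w)
        weights = torch.softmax(scores, dim=1)      # normalize across exposures
        return (weights * exposures).sum(dim=1)     # fused image, (B, 3, H, W)

# Usage: fuse an under-exposed and an over-exposed image.
stack = torch.rand(1, 2, 3, 128, 128)
fused = AttentionFusion()(stack)
print(fused.shape)   # torch.Size([1, 3, 128, 128])
```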
The core contribution of this work is a quantitative connection between denoising and compression, which the authors use to design a conceptual framework for building white-box (mathematically interpretable) transformer-like deep neural networks that can learn through unsupervised pretext tasks such as masked autoencoding.
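For readers unfamiliar with the pretext task mentioned above, the sketch below shows generic masked autoencoding: a fraction of input patches is replaced by a learned mask token, the sequence is encoded and decoded, and the reconstruction loss is computed only on the masked positions. It uses a standard nn.TransformerEncoder rather than the paper's white-box, compression-derived layers; the patch size, masking ratio, and dimensions are illustrative assumptions.

```python
# Illustrative sketch only: generic masked-autoencoding pretext task.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedAutoencoder(nn.Module):
    def __init__(self, patch_dim=48, d_model=64, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(patch_dim, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.decode = nn.Linear(d_model, patch_dim)

    def forward(self, patches):
        # patches: (B, N, patch_dim) flattened image patches
        b, n, _ = patches.shape
        tokens = self.embed(patches)
        # Randomly replace a fraction of tokens with the learned mask token.
        mask = torch.rand(b, n, device=patches.device) < self.mask_ratio
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        recon = self.decode(self.encoder(tokens))
        # Reconstruction loss is taken only over the masked positions.
        return F.mse_loss(recon[mask], patches[mask])

patches = torch.rand(2, 49, 48)        # e.g. a 7x7 grid of 4x4x3 patches
loss = MaskedAutoencoder()(patches)
loss.backward()
```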
The authors propose the Spiking-UNet, an efficient integration of Spiking Neural Networks (SNNs) and the U-Net architecture for image segmentation and denoising tasks. They introduce multi-threshold spiking neurons and a connection-wise normalization method to address the challenges of information propagation and training in deep SNNs. The Spiking-UNet achieves performance comparable to traditional U-Net models while significantly reducing inference time.
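To illustrate the multi-threshold idea, the sketch below implements an integrate-and-fire neuron that emits a graded output equal to the largest threshold its membrane potential has crossed, then subtracts that value (a soft reset) and rate-codes the result over a few time steps. The threshold values, reset rule, and time loop are assumptions for exposition, and the authors' connection-wise normalization is not shown.

```python
# Illustrative sketch only: multi-threshold integrate-and-fire activation.
import torch
import torch.nn as nn

class MultiThresholdIF(nn.Module):
    def __init__(self, thresholds=(1.0, 2.0, 4.0)):
        super().__init__()
        # Stored in descending order so the largest crossed level fires first.
        self.register_buffer("thresholds",
                             torch.tensor(sorted(thresholds, reverse=True)))

    def forward(self, input_current, steps=8):
        """input_current: (B, C, H, W), held constant over `steps` time steps."""
        v = torch.zeros_like(input_current)        # membrane potential
        spikes = torch.zeros_like(input_current)   # accumulated spike output
        for _ in range(steps):
            v = v + input_current                  # integrate
            out = torch.zeros_like(v)
            for th in self.thresholds.tolist():    # fire at the largest crossed level
                fire = (v >= th) & (out == 0)
                out = torch.where(fire, torch.full_like(v, th), out)
            v = v - out                            # soft reset by the emitted value
            spikes = spikes + out
        return spikes / steps                      # rate-coded activation

# Usage inside a U-Net-style block: convolution followed by spiking activation.
x = torch.rand(1, 8, 32, 32)
act = MultiThresholdIF()(nn.Conv2d(8, 8, 3, padding=1)(x))
print(act.shape)   # torch.Size([1, 8, 32, 32])
```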