This paper provides a comprehensive survey of deep learning techniques for event-based vision, covering event representations, quality enhancement, image/video reconstruction and restoration, and scene understanding and 3D vision. It also conducts benchmark experiments and discusses challenges and future research directions in this field.
This paper introduces state-space models (SSMs) as a novel approach to two key challenges in event-based vision: (1) performance degradation when a model operates at temporal frequencies different from those seen during training, and (2) slow training. The proposed SSM-based models generalize better to higher frequencies and train roughly 33% faster than existing recurrent and transformer-based methods.
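The frequency-generalization property of SSMs comes from their continuous-time parameterization: the same learned parameters can be re-discretized with a different timestep at inference. A minimal sketch of this idea, using a hypothetical scalar (diagonal) SSM with zero-order-hold discretization rather than the paper's actual architecture:

```python
import numpy as np

def discretize(A, B, dt):
    """Zero-order-hold discretization of the continuous-time SSM
    x'(t) = A*x(t) + B*u(t), for the scalar (diagonal) case:
    A_bar = exp(dt*A),  B_bar = (A_bar - 1)/A * B."""
    Ad = np.exp(dt * A)
    Bd = (Ad - 1.0) / A * B
    return Ad, Bd

# Hypothetical continuous-time parameters (would be learned in practice).
A, B, C = -0.5, 1.0, 1.0

def run(inputs, dt):
    """Step the discretized SSM over a sequence sampled every dt seconds."""
    Ad, Bd = discretize(A, B, dt)
    x, outputs = 0.0, []
    for u in inputs:
        x = Ad * x + Bd * u      # linear recurrence in discrete time
        outputs.append(C * x)
    return outputs

# The same one-second constant signal, sampled at two different rates:
y_slow = run([1.0] * 10, dt=0.1)    # 10 Hz
y_fast = run([1.0] * 20, dt=0.05)   # 20 Hz
# Because only dt changes, both runs track the same continuous-time
# trajectory, so the final outputs agree (exactly, for constant input).
```

Changing `dt` alone adapts the model to a higher event rate without retraining, which is the mechanism behind the generalization claim.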
Dynamic vision sensors (DVS) suffer from significant background activity (BA) noise, which is difficult to distinguish from the true sensor signal. This work presents a novel technique based on detrended fluctuation analysis (DFA) to objectively quantify and characterize BA noise in DVS data without the need for ground truth.
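DFA itself is a standard procedure: integrate the mean-removed series, remove a linear trend within windows of each scale, and read the scaling exponent off the log-log slope of the RMS fluctuation. A generic sketch of that procedure (not the paper's specific pipeline; how the event stream is converted to a 1-D series is left to the original work):

```python
import numpy as np

def dfa_exponent(signal, scales):
    """Estimate the DFA scaling exponent alpha of a 1-D series.

    Steps: build the integrated profile of the mean-removed signal,
    split it into non-overlapping windows of each scale s, subtract a
    linear fit per window, and compute the RMS fluctuation F(s).
    alpha is the slope of log F(s) versus log s."""
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())              # integrated profile
    fluctuations = []
    for s in scales:
        n_windows = len(profile) // s
        segments = profile[:n_windows * s].reshape(n_windows, s)
        t = np.arange(s)
        rms = []
        for seg in segments:
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return alpha

# Sanity check: uncorrelated white noise yields alpha near 0.5,
# while its cumulative sum (Brownian-like) yields alpha near 1.5.
rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
scales = [16, 32, 64, 128, 256]
a_white = dfa_exponent(white, scales)
a_brown = dfa_exponent(np.cumsum(white), scales)
```

An exponent near 0.5 indicates uncorrelated (noise-like) activity, which is what makes DFA usable as a ground-truth-free noise characterization.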