This paper introduces EMatch, a novel framework that unifies event-based optical flow estimation and stereo matching as a dense correspondence matching problem, enabling both tasks to be solved within a single model.
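As a rough illustration of how optical flow and stereo matching can share one correspondence formulation, the sketch below builds a local correlation volume between two feature maps and reads out the best per-pixel match; the same machinery yields 2D flow or, when restricted to horizontal shifts, 1D disparity. The feature maps and search radius are placeholders, not EMatch internals.

```python
import numpy as np

def correlation_match(feat_ref, feat_tgt, radius):
    """Dense correspondence by local correlation.

    feat_ref, feat_tgt: (H, W, C) feature maps.
    Returns the per-pixel (dy, dx) displacement of the best match
    within a (2*radius + 1)^2 search window.
    """
    H, W, _ = feat_ref.shape
    best_score = np.full((H, W), -np.inf)
    best_disp = np.zeros((H, W, 2))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(feat_tgt, shift=(-dy, -dx), axis=(0, 1))
            score = np.sum(feat_ref * shifted, axis=-1)  # dot-product correlation
            mask = score > best_score
            best_score[mask] = score[mask]
            best_disp[mask] = (dy, dx)
    return best_disp

# Optical flow: full 2D search, e.g. flow = correlation_match(feat_t0, feat_t1, radius=4).
# Stereo: the same correlation, but only horizontal shifts matter, so the
# disparity is the x-component of the displacement with dy fixed at 0.
```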
EROAM is an event-based system that leverages a spherical event representation and a novel Event Spherical Iterative Closest Point (ES-ICP) algorithm to achieve real-time, accurate camera rotation estimation and high-quality panoramic reconstruction, outperforming existing methods in accuracy, robustness, and computational efficiency.
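The core of a spherical event representation is to back-project each event's pixel coordinates onto the unit sphere using the camera intrinsics, so that pure camera rotation moves the points rigidly on the sphere. The snippet below is a minimal sketch of that projection step only (intrinsics and event fields are placeholders, and the ES-ICP alignment itself is omitted).

```python
import numpy as np

def events_to_sphere(xs, ys, K):
    """Back-project event pixel coordinates onto the unit sphere.

    xs, ys: (N,) arrays of event pixel coordinates.
    K:      3x3 camera intrinsic matrix.
    Returns (N, 3) unit bearing vectors; under a pure camera rotation R,
    the same scene point maps to R @ bearing, which is what a
    spherical ICP-style alignment can exploit.
    """
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1)        # (N, 3) homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                            # back-project to rays
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True) # normalize onto the sphere
```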
This paper introduces a novel method for quadrotor obstacle avoidance using an event camera, demonstrating superior performance at high speeds compared to traditional vision-based methods and enabling flight in low-light conditions.
This research paper introduces EVSNet, a novel framework that leverages event cameras to improve the accuracy and temporal consistency of video semantic segmentation in low-light conditions.
This research paper introduces a novel method for estimating relative distances between objects using an event camera, inspired by the biological mechanism of gaze stabilization.
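The underlying motion-parallax principle is that, for a translating camera, the image velocity induced by translation scales inversely with depth, so the ratio of flow magnitudes between two objects gives their relative distance. The toy calculation below illustrates only this principle under an assumed common translation, not the paper's gaze-stabilization pipeline.

```python
def relative_distance(flow_mag_a, flow_mag_b):
    """Relative distance of object A vs. object B from motion parallax.

    Assumes the image motion of both objects is dominated by the same
    camera translation, so |flow| is proportional to 1 / depth.
    Returns depth_a / depth_b.
    """
    return flow_mag_b / flow_mag_a

# Example: object A sweeps across the sensor at 12 px/s, object B at 3 px/s.
print(relative_distance(12.0, 3.0))  # 0.25, i.e. A is roughly 4x closer than B
```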
This paper introduces BlinkFlow, a novel simulator and large-scale dataset designed to advance event-based optical flow estimation by addressing the bias and limited scale of existing datasets.
Event cameras, with their unique ability to capture changes in brightness asynchronously, offer significant advantages for improving odometry in robotics by overcoming limitations of traditional sensors like frame-based cameras and LiDAR, especially in challenging environments.
This research paper introduces a novel method for estimating camera motion and scene geometry from event camera data using event-based normal flow, proposing both linear and continuous-time solvers that outperform existing methods in accuracy and efficiency, particularly in handling sudden motion changes.
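To give a flavor of how normal flow admits linear solvers, the sketch below estimates angular velocity by least squares under a pure-rotation assumption: each event contributes one scalar constraint, the projection of the rotational flow field onto its local gradient direction. This is a textbook-style illustration, not the paper's linear or continuous-time solver.

```python
import numpy as np

def solve_angular_velocity(points, grad_dirs, normal_flow_mags):
    """Least-squares angular velocity from normal flow (pure rotation assumed).

    points:           (N, 2) calibrated image coordinates (x, y).
    grad_dirs:        (N, 2) unit gradient directions (normal flow directions).
    normal_flow_mags: (N,)   measured normal flow magnitudes.

    The rotational image velocity at (x, y) is B(x, y) @ omega with
        B = [[ x*y,     -(1 + x^2),  y ],
             [ 1 + y^2,  -x*y,      -x ]],
    and each normal flow measurement constrains g^T B omega = n.
    """
    rows, rhs = [], []
    for (x, y), g, n in zip(points, grad_dirs, normal_flow_mags):
        B = np.array([[x * y, -(1.0 + x * x),  y],
                      [1.0 + y * y, -x * y,   -x]])
        rows.append(g @ B)   # one 1x3 constraint row per event
        rhs.append(n)
    A = np.stack(rows)
    omega, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return omega  # (wx, wy, wz) in rad/s
```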
This research paper introduces novel, efficient methods for recognizing oscillatory actions in wildlife videos using event cameras and Fourier analysis, achieving comparable accuracy to deep learning models with significantly fewer parameters.
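A minimal version of the Fourier-analysis idea is to build an event-rate time series for a region of interest, take its FFT, and read off the dominant frequency, which can then be matched against the oscillation band of a behaviour. The sketch below shows this pipeline with made-up bin sizes and frequency bands, not the paper's implementation.

```python
import numpy as np

def dominant_frequency(event_timestamps, bin_size=0.01):
    """Dominant oscillation frequency (Hz) of an event stream.

    event_timestamps: (N,) event times in seconds for a region of interest.
    bin_size:         width of the temporal bins forming the event-rate signal.
    """
    t = np.asarray(event_timestamps)
    n_bins = max(int((t.max() - t.min()) / bin_size), 2)
    rate, _ = np.histogram(t, bins=n_bins)        # event-rate time series
    rate = rate - rate.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(rate))
    freqs = np.fft.rfftfreq(n_bins, d=bin_size)
    return freqs[np.argmax(spectrum[1:]) + 1]      # skip the zero-frequency bin

# Hypothetical classification by frequency band, e.g. wing-beat vs. slower motion:
# f = dominant_frequency(ts); label = "wing_beat" if f > 5.0 else "slow_oscillation"
```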