This paper introduces a novel neuromorphic dataset and methodology for efficient eye tracking using event data captured by a Dynamic Vision Sensor (DVS). The key contributions are:
Ini-30 Dataset: The first event-based eye tracking dataset, collected with two DVS cameras mounted on a glasses frame and capturing natural eye movements in unconstrained settings.
Retina Model: A lightweight Spiking Neural Network (SNN) architecture based on Integrate-and-Fire (IAF) neurons, with only 64k parameters. Retina achieves a pupil tracking error of 3.24 pixels on a 64x64 DVS input, outperforming the state-of-the-art event-based method 3ET; a minimal code sketch follows this list.
Neuromorphic Hardware Deployment: Retina is deployed on the Speck neuromorphic processor, demonstrating end-to-end power consumption of 2.89-4.8 mW and latency of 5.57-8.01 ms, making it suitable for energy-efficient, low-latency eye tracking applications.
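To make the model idea concrete, here is a minimal, hypothetical sketch of an Integrate-and-Fire spiking CNN that regresses pupil coordinates from binned 64x64 DVS event frames, written in plain PyTorch. The layer sizes, neuron dynamics, and class names (`IAF`, `TinySpikingTracker`) are illustrative assumptions and do not reproduce the authors' exact Retina architecture or its Speck deployment.

```python
# Hypothetical sketch: IAF-based spiking CNN for pupil-coordinate regression
# from binned DVS event frames. Sizes and dynamics are illustrative only.
import torch
import torch.nn as nn


class IAF(nn.Module):
    """Integrate-and-Fire neuron: accumulates input, spikes at threshold, no leak."""

    def __init__(self, threshold: float = 1.0):
        super().__init__()
        self.threshold = threshold
        self.v = None  # membrane potential, initialised lazily to match input shape

    def reset(self):
        self.v = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.v is None:
            self.v = torch.zeros_like(x)
        self.v = self.v + x                          # integrate input (no leak term)
        spikes = (self.v >= self.threshold).float()  # emit a spike where threshold is crossed
        self.v = self.v - spikes * self.threshold    # soft reset by subtraction
        return spikes


class TinySpikingTracker(nn.Module):
    """Small convolutional SNN mapping 64x64 event frames to (x, y) pupil coordinates."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(2, 8, 3, stride=2, padding=1)   # 2 input channels: ON/OFF polarity
        self.iaf1 = IAF()
        self.conv2 = nn.Conv2d(8, 16, 3, stride=2, padding=1)
        self.iaf2 = IAF()
        self.pool = nn.AdaptiveAvgPool2d(4)
        self.head = nn.Linear(16 * 4 * 4, 2)                    # regress (x, y)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, B, 2, 64, 64) -- T time bins of accumulated events
        for layer in (self.iaf1, self.iaf2):
            layer.reset()
        out_sum = 0.0
        for t in range(frames.shape[0]):
            x = self.iaf1(self.conv1(frames[t]))
            x = self.iaf2(self.conv2(x))
            x = self.pool(x).flatten(1)
            out_sum = out_sum + self.head(x)
        return out_sum / frames.shape[0]            # average prediction over time bins


if __name__ == "__main__":
    model = TinySpikingTracker()
    dummy = torch.rand(10, 1, 2, 64, 64)            # 10 time bins, batch of 1
    print(model(dummy).shape)                       # torch.Size([1, 2])
```

Training such a network would typically rely on surrogate gradients for the non-differentiable spike function, and deploying it on neuromorphic hardware such as Speck would additionally require mapping the layers to the chip's supported event-driven operations.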
The authors show that Retina outperforms 3ET while being 35 times more computationally efficient. This work paves the way for further development of neuromorphic solutions for real-world, event-based eye tracking.