
Self-Calibration of Eye Tracking in VR Headsets Using Fixation-Based Optimization


Core Concepts
The proposed method can accurately calibrate eye tracking in VR headsets by optimizing the offset between the optical and visual axes using the dispersion of points of regard during fixations, without requiring explicit user calibration or scene images.
Abstract
The study proposes a novel self-calibration method for eye tracking in virtual reality (VR) headsets. The method rests on the assumption that during visual fixation, the points of regard (PoRs) from different viewpoints are distributed within a small area on an object surface, even when the user's head is moving. Fixations are first detected from the time-series data of uncalibrated gaze directions using an extension of the I-VDT (velocity and dispersion threshold identification) algorithm to a three-dimensional (3D) scene. The calibration parameters are then optimized by minimizing the sum of a dispersion metric of the PoRs during fixations. This identifies the user-dependent offset from the optical axis to the visual axis without explicit user calibration, image processing, or marker-substitute objects. Evaluated on gaze data from 18 participants walking in two VR environments with many occlusions, the method achieved an accuracy of 2.1°, significantly better than the typical 4-7° offset. It is the first self-calibration approach applicable to 3D environments without requiring scene images.
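The core idea — that only the correct angular offset makes the PoRs of a fixation collapse onto one spot when the head moves — can be illustrated with a minimal 2D sketch. This is not the paper's algorithm: it assumes a single synthetic fixation on a known frontal plane, a scalar offset angle, and a brute-force grid search; all function names and numbers are illustrative.

```python
import math
import random

def simulate_fixation(positions, target=(0.0, 2.0),
                      true_offset_deg=4.0, noise_deg=0.1):
    """Synthesize uncalibrated (optical-axis) gaze angles while the user
    fixates `target` from several head positions in the x-z plane."""
    random.seed(0)
    samples = []
    for ex, ez in positions:
        visual = math.atan2(target[0] - ex, target[1] - ez)   # true visual axis
        optical = visual - math.radians(true_offset_deg)      # user-dependent offset
        optical += math.radians(random.gauss(0.0, noise_deg)) # tracker noise
        samples.append((ex, ez, optical))
    return samples

def dispersion(samples, offset_deg, plane_z=2.0):
    """Variance of the PoRs on the surface plane after applying a candidate
    offset -- the quantity the self-calibration minimizes."""
    pors = [ex + math.tan(ang + math.radians(offset_deg)) * (plane_z - ez)
            for ex, ez, ang in samples]
    mean = sum(pors) / len(pors)
    return sum((p - mean) ** 2 for p in pors) / len(pors)

# Head wanders laterally and walks from 3.0 m to 0.5 m from the target;
# the varying viewpoint is what makes a wrong offset spread the PoRs.
positions = [(0.05 * (i % 3 - 1), -1.0 + i * 0.25) for i in range(11)]
samples = simulate_fixation(positions)

# Grid search over candidate offsets; the dispersion minimum recovers
# the simulated 4 deg offset without any explicit calibration targets.
best_disp, best_offset = min(
    (dispersion(samples, c / 10.0), c / 10.0) for c in range(0, 101))
```

With a static head the dispersion would be nearly invariant to the offset; the viewpoint changes during walking are what make the objective identifiable, which mirrors the paper's reliance on head movement.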
Statistics
The proposed method achieved an accuracy of 2.1° in calibrating the eye tracking in VR environments. The average offset between the optical and visual axes is typically 4-7°, with a maximum difference of 19.57° horizontally and 16.25° vertically.
Quotes
"The proposed method can potentially identify the optimal calibration parameters representing the user-dependent offset from the optical axis to the visual axis without explicit user calibration, image processing, or marker-substitute objects."

"For the gaze data of 18 participants walking in two VR environments with many occlusions, the proposed method achieved an accuracy of 2.1°, which was significantly lower than the average offset."

Deeper Inquiries

How could the proposed self-calibration method be extended to work in real-world environments beyond just VR?

The proposed self-calibration method could be extended to work in real-world environments beyond VR by adapting the algorithm to accommodate the complexities and variations present in real-world settings. One way to achieve this is by incorporating additional sensors or data sources to enhance the accuracy and robustness of the calibration process. For example, integrating inertial sensors or external cameras to track head movements and eye positions in real-time could provide more comprehensive data for calibration. Furthermore, the algorithm could be optimized to handle different lighting conditions, varying distances to objects, and occlusions commonly encountered in real-world scenarios. By refining the calibration parameters based on a wider range of environmental factors, the method can be adapted to suit diverse real-world applications such as automotive safety systems, human-computer interaction, and medical diagnostics.

What are the potential limitations or failure cases of the fixation-based optimization approach, and how could they be addressed?

One potential limitation of the fixation-based optimization approach is the assumption that fixations are always accurately detected and that the dispersion of points of regard (PoRs) during fixations is consistent. However, in practice, factors such as rapid eye movements, blinks, or distractions can lead to inaccuracies in fixation detection, resulting in suboptimal calibration parameters. To address these limitations, the algorithm could be enhanced with additional validation checks or filters to ensure the reliability of detected fixations. For example, incorporating machine learning algorithms to differentiate between true fixations and other eye movements could improve the accuracy of the calibration process. Additionally, implementing real-time feedback mechanisms to adjust the calibration parameters based on the quality of fixation detection could help mitigate errors and improve overall performance.
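One such validation layer can be sketched as simple plausibility checks on a candidate fixation: minimum duration, bounded angular dispersion of the gaze vectors, and no saccade-speed jumps between samples. This is an illustrative filter in the spirit of I-VDT, not the paper's detector; the thresholds are placeholders.

```python
import math

def _angle_deg(u, v):
    """Angle in degrees between two unit gaze vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.degrees(math.acos(dot))

def validate_fixation(gaze_dirs, timestamps, max_dispersion_deg=1.5,
                      min_duration_s=0.1, max_velocity_deg_s=30.0):
    """Reject a candidate fixation that is too short, too scattered,
    or contains saccade-speed samples (thresholds are illustrative)."""
    if timestamps[-1] - timestamps[0] < min_duration_s:
        return False
    # Angular dispersion: largest pairwise angle between gaze vectors.
    for i in range(len(gaze_dirs)):
        for j in range(i + 1, len(gaze_dirs)):
            if _angle_deg(gaze_dirs[i], gaze_dirs[j]) > max_dispersion_deg:
                return False
    # Sample-to-sample angular velocity must stay below saccade speeds.
    for k in range(1, len(gaze_dirs)):
        dt = timestamps[k] - timestamps[k - 1]
        if _angle_deg(gaze_dirs[k - 1], gaze_dirs[k]) / dt > max_velocity_deg_s:
            return False
    return True
```

Fixations that fail these checks would simply be excluded from the optimization, so a few blinks or mislabeled saccades degrade the data volume rather than the calibration itself.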

What other eye movement characteristics beyond fixations could be leveraged to further improve the self-calibration accuracy in challenging environments?

Beyond fixations, other eye movement characteristics such as saccades, smooth pursuits, and microsaccades could be leveraged to enhance self-calibration accuracy in challenging environments. By analyzing the patterns and dynamics of these eye movements, the algorithm can gain additional insights into the user's gaze behavior and refine the calibration parameters accordingly. For instance, incorporating information from saccades, which are rapid eye movements between fixations, can help in determining the transition between different points of regard and improving the overall calibration accuracy. Similarly, tracking smooth pursuits, which are slow eye movements that track moving objects, can provide valuable data for adjusting calibration parameters in dynamic environments. By integrating a comprehensive analysis of various eye movement characteristics, the self-calibration method can adapt to a wider range of scenarios and user behaviors, leading to more precise and reliable gaze tracking in challenging real-world environments.
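Separating these movement types usually starts from angular velocity, as in the classic I-VT scheme: intervals above a velocity threshold are saccades, the rest are fixation-like (I-VDT then uses dispersion to split pursuits from fixations). A minimal sketch, with an illustrative threshold and a 1D angle signal for simplicity:

```python
def ivt_classify(angles_deg, timestamps, velocity_thresh=100.0):
    """Minimal I-VT-style labeling of inter-sample intervals: angular
    velocity (deg/s) above the threshold is a saccade, otherwise the
    interval is treated as fixation-like. Threshold is illustrative."""
    labels = []
    for k in range(1, len(angles_deg)):
        velocity = abs(angles_deg[k] - angles_deg[k - 1]) / (
            timestamps[k] - timestamps[k - 1])
        labels.append('saccade' if velocity >= velocity_thresh else 'fixation')
    return labels
```

The saccade intervals found this way delimit candidate fixations, which is why even a crude velocity classifier is useful scaffolding for the dispersion-based calibration described above.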