Core concepts
PrivatEyes introduces a novel approach combining federated learning and secure multi-party computation for privacy-enhancing gaze estimation.
Summary
PrivatEyes addresses privacy risks in gaze estimation by combining federated learning with secure multi-party computation. It keeps training data private while maintaining accuracy and scalability across multiple datasets, prevents information leakage, and offers strong security guarantees against malicious attacks, as demonstrated by evaluations on several datasets.
The motivation is that state-of-the-art gaze estimation methods require large-scale training data, and collecting such data carries significant privacy risks. PrivatEyes enhances privacy without compromising accuracy or increasing computational cost, and evaluations on multiple datasets show performance comparable to non-secure counterparts.
Key points: the importance of protecting personal gaze data, the introduction of PrivatEyes as a privacy-enhancing training approach, and its evaluation across different datasets. Individual gaze data remains private even in the presence of malicious servers, giving stronger privacy guarantees than previous approaches.
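To make the combination of federated learning and secure aggregation concrete, here is a minimal sketch of one standard building block: additive secret sharing of client model updates across multiple servers, so that no single server ever sees a raw update, yet the servers can jointly recover only the aggregate. This is a generic illustration of the technique, not the actual PrivatEyes protocol; the field modulus, fixed-point scale, and single-parameter "model" are simplifying assumptions.

```python
import random

PRIME = 2**31 - 1  # field modulus for additive secret sharing (illustrative choice)
SCALE = 10**6      # fixed-point scale so float updates fit in the field

def share(value, n_servers):
    """Split a fixed-point value into n additive shares mod PRIME."""
    fixed = int(round(value * SCALE)) % PRIME
    shares = [random.randrange(PRIME) for _ in range(n_servers - 1)]
    shares.append((fixed - sum(shares)) % PRIME)  # shares sum to the value mod PRIME
    return shares

def reconstruct(server_sums, n_clients):
    """Combine per-server share sums into the average update."""
    total = sum(server_sums) % PRIME
    if total > PRIME // 2:  # map back into the signed range
        total -= PRIME
    return total / SCALE / n_clients

# Each client holds a local model update (a single parameter, for brevity).
client_updates = [0.5, -0.25, 0.75]
n_servers = 3

# Each client sends one share to each server; no server sees a raw update,
# and any single server's shares are uniformly random.
server_sums = [0] * n_servers
for update in client_updates:
    for s, sh in enumerate(share(update, n_servers)):
        server_sums[s] = (server_sums[s] + sh) % PRIME

avg = reconstruct(server_sums, len(client_updates))  # only the aggregate is revealed
```

Here `avg` equals the plain federated average of the three updates, while each server observed only uniformly random shares; privacy holds as long as the servers do not all collude.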
Stats
Latest gaze estimation methods require large-scale training data.
PrivatEyes combines federated learning and secure multi-party computation.
Evaluations show improved privacy without compromising accuracy or increasing computational cost.