Core Concepts
Real-valued function classes with finite fat-shattering dimension are learnable in both realizable and agnostic settings under adversarial perturbations, with proper learning algorithms for convex function classes.
Summary
The content presents a theoretical study of adversarially robust learning of real-valued predictors in the PAC model, allowing arbitrary perturbation sets. The key results are:
For robust regression with the ℓp loss, the authors give a learning algorithm whose sample complexity is bounded in terms of the fat-shattering dimension of the function class. The algorithm is proper for convex function classes, circumventing a known impossibility result for proper robust learning of non-convex classes.
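To make the objective concrete, the adversarially robust ℓp risk can plausibly be formalized as follows, writing 𝒰(x) for the perturbation set of a point x (an assumed notation consistent with the standard setup, not a quotation of the paper):

\[
\mathcal{R}_p(f) \;=\; \mathbb{E}_{(x,y)\sim \mathcal{D}}\Big[\; \sup_{z \in \mathcal{U}(x)} |f(z) - y|^p \;\Big].
\]

The complexity measure behind the bounds is standard: a class F γ-fat-shatters points x₁, …, xₘ if there exist witnesses r₁, …, rₘ such that for every pattern b ∈ {0, 1}ᵐ some f ∈ F satisfies f(xᵢ) ≥ rᵢ + γ when bᵢ = 1 and f(xᵢ) ≤ rᵢ − γ when bᵢ = 0; the fat-shattering dimension at scale γ is the largest such m.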
The authors introduce a technique for handling cutoff parameters that vary across sample points, which lets them establish generalization from approximate interpolation. Combined with a median boosting approach, this yields an improved sample complexity bound.
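As an illustration of the boosting component, below is a minimal non-robust sketch in the style of MedBoost, assuming numpy arrays, a single fixed cutoff eta, and a hypothetical weak_learner oracle; the paper's actual algorithm additionally handles adversarial perturbations and per-point cutoffs:

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median of `values` with nonnegative `weights`."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w)
    # first index where the cumulative weight reaches half the total
    return v[np.searchsorted(cum, 0.5 * w.sum())]

def med_boost(X, y, weak_learner, eta, rounds):
    """Median boosting sketch: reweight points whose prediction misses
    the label by more than the cutoff `eta`, then aggregate the weak
    regressors by a weighted median.  `weak_learner(X, y, D)` is a
    hypothetical oracle returning a callable predictor trained under
    the distribution D over sample points."""
    m = len(X)
    D = np.full(m, 1.0 / m)                 # distribution over sample points
    hypotheses, alphas = [], []
    for _ in range(rounds):
        h = weak_learner(X, y, D)
        preds = np.array([h(x) for x in X])
        correct = np.abs(preds - y) <= eta  # within-cutoff predictions
        err = D[~correct].sum()             # weighted cutoff loss
        if err >= 0.5:                      # no weak-learning advantage left
            break
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        hypotheses.append(h)
        alphas.append(alpha)
        # AdaBoost-style update: upweight points missed beyond the cutoff
        D = D * np.exp(np.where(correct, -alpha, alpha))
        D = D / D.sum()
    if not hypotheses:
        raise RuntimeError("weak learner failed on the first round")
    return lambda x: weighted_median([h(x) for h in hypotheses], alphas)
```

The weighted-median aggregation is what lets the cutoff guarantee survive combination: whenever a weighted majority of the weak regressors predict within η of the label, so does their weighted median.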
For robust (η, β)-regression, the authors give learning algorithms in both the realizable and agnostic settings, with sample complexity bounds depending on the fat-shattering dimension. The realizable case uses a robust variant of median boosting, and the agnostic case is reduced to the realizable one.
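One plausible reading of the (η, β)-regression goal, where η is the prediction tolerance and β the allowed failure mass (an assumed formalization consistent with cutoff-loss objectives): the learner must output f with

\[
\Pr_{(x,y)\sim\mathcal{D}}\Big[\; \sup_{z\in\mathcal{U}(x)} |f(z)-y| > \eta \;\Big] \;\le\; \beta .
\]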
The main technical contributions include constructing adversarially robust sample compression schemes, deriving generalization from approximate interpolation with changing cutoffs, and developing new algorithms for real-valued robust learning.
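For context on the first contribution, sample compression schemes yield generalization through bounds of the following standard form (the generic realizable-case compression bound, stated here as background rather than as the paper's exact statement): if a hypothesis can be reconstructed from k of the m sample points while remaining consistent with the whole sample, then with probability at least 1 − δ,

\[
\mathrm{err}(f) \;\le\; O\!\Big( \frac{k \log m + \log(1/\delta)}{m - k} \Big).
\]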
Statistics
The content does not provide any specific numerical data or statistics. It focuses on theoretical results and sample complexity bounds.