Core Concepts
Efficient PAC learning of low-degree polynomial threshold functions under nasty noise.
Abstract
The paper gives a new algorithm for PAC learning K-sparse degree-d polynomial threshold functions (PTFs) under nasty noise, with a focus on attribute efficiency. It addresses the challenge of adversarially corrupted data via a structural result on sparse Chow vectors and an algorithmic technique for robust Chow vector estimation. The summary covers the introduction, key results, main techniques, related work, and a detailed roadmap.
Introduction
Discusses the importance of low-degree PTFs in machine learning.
Main Results
Presents an efficient algorithm for learning sparse PTFs with dimension-independent noise tolerance.
Overview of Main Techniques
Structural result: attribute sparsity of the concept induces sparse Chow vectors under the Hermite polynomial basis.
Algorithmic result: Attribute-efficient robust Chow vector estimation.
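The degree-at-most-d Chow parameters of a concept f under the Gaussian distribution are the expectations E[y · He_α(x)] over Hermite basis polynomials He_α. As a rough illustration of estimating them robustly, the sketch below uses a coordinate-wise trimmed mean; this is a generic robustness heuristic standing in for the paper's actual estimator, whose procedure and guarantees differ.

```python
import numpy as np

def hermite(x, k):
    """Probabilists' Hermite polynomial He_k, evaluated elementwise."""
    if k == 0:
        return np.ones_like(x)
    if k == 1:
        return x
    return x * hermite(x, k - 1) - (k - 1) * hermite(x, k - 2)

def chow_estimates(X, y, degree=2, trim=0.1):
    """Estimate per-attribute Chow parameters E[y * He_k(x_i)] for
    k = 0..degree, using a trimmed mean per coordinate as a simple
    (heuristic) defense against corrupted samples."""
    N, n = X.shape
    est = np.zeros((degree + 1, n))
    lo, hi = int(trim * N), int((1 - trim) * N)
    for k in range(degree + 1):
        vals = np.sort(y[:, None] * hermite(X, k), axis=0)  # (N, n)
        est[k] = vals[lo:hi].mean(axis=0)                   # trimmed mean
    return est

# Toy usage: labels from a 1-sparse halfspace sign(x_0) over 10 attributes.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 10))
y = np.sign(X[:, 0])
est = chow_estimates(X, y)
# The degree-1 Chow vector est[1] concentrates on attribute 0,
# mirroring the structural result: sparsity shows up in the Chow vector.
```

The sketch illustrates why sparse Chow vectors help: only the coordinates touching the relevant attributes are far from zero, so an attribute-efficient learner can afford to estimate them all and keep the large ones.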
Related Works
Surveys prior work on attribute-efficient learning and noise-tolerant classification.
Roadmap
Outlines the notation, algorithm, performance-guarantee, and conclusion sections.
Preliminaries
Defines vectors, matrices, probability spaces, and polynomials used in the analysis.
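To make the concept class concrete, a short sketch of evaluating a K-sparse degree-d PTF f(x) = sign(p(x)), where p is a degree-d polynomial and K-sparsity means p involves at most K of the n attributes. The monomial-dictionary representation here is an illustrative choice, not the paper's notation.

```python
import numpy as np

def ptf(coeffs, x):
    """Evaluate f(x) = sign(p(x)), with p given as a dict mapping a
    monomial (tuple of variable indices, possibly repeated) to its
    coefficient. The empty tuple () is the constant term."""
    val = sum(c * np.prod([x[i] for i in mono]) for mono, c in coeffs.items())
    return 1 if val >= 0 else -1

# Example: a 2-sparse degree-2 PTF over n = 10 attributes,
# f(x) = sign(x_0 * x_3 - 0.5); only attributes 0 and 3 are relevant.
p = {(0, 3): 1.0, (): -0.5}
x = np.zeros(10)
x[0], x[3] = 1.0, 1.0
# f(x) = sign(1.0 - 0.5) = +1
```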
Performance Guarantees
Analyzes the SparseFilter algorithm's performance in filtering corrupted samples efficiently.
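For intuition about what a filtering step does, here is a generic iterative spectral-filtering sketch: find the direction of largest empirical variance, and discard samples with extreme projections until the variance is within a bound. This is a standard outlier-filtering pattern, not the paper's SparseFilter, whose scores, thresholds, and sparsity handling differ.

```python
import numpy as np

def filter_outliers(Z, eta=0.1, var_bound=2.0, max_rounds=20):
    """Iteratively remove samples whose projection onto the top
    eigenvector of the empirical covariance is extreme, until the
    largest eigenvalue drops below var_bound (a heuristic sketch)."""
    keep = np.ones(len(Z), dtype=bool)
    for _ in range(max_rounds):
        Y = Z[keep]
        C = np.cov(Y, rowvar=False)
        w, V = np.linalg.eigh(C)          # eigenvalues ascending
        if w[-1] <= var_bound:            # variance bounded: filtering done
            break
        proj = (Z - Y.mean(axis=0)) @ V[:, -1]
        tau = np.quantile(np.abs(proj[keep]), 1 - eta / 2)
        keep &= np.abs(proj) <= tau       # drop the most extreme tail
    return keep

# Toy usage: 2000 Gaussian inliers plus 100 planted outliers at 10*e_1.
rng = np.random.default_rng(1)
inliers = rng.standard_normal((2000, 5))
outliers = np.zeros((100, 5))
outliers[:, 0] = 10.0
Z = np.vstack([inliers, outliers])
keep = filter_outliers(Z)
# The planted outliers are removed while most inliers survive.
```

The termination logic mirrors the note above: the loop either certifies bounded variance (filtering succeeded) or keeps shrinking the sample, so it must stop.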
Termination Analysis
Examines the termination conditions and output quality when filtering is successful.
Stats
A nasty adversary EX(η) takes as input a sample size N requested by the learner...
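The nasty-noise model can be simulated in a few lines: the oracle draws the N clean labeled examples the learner requested, then an adversary who sees the whole sample replaces up to an η fraction of the pairs with arbitrary ones. The callbacks `sampler` and `adversary` below are hypothetical stand-ins for the target distribution and the adversary's strategy, for illustration only.

```python
import numpy as np

def nasty_oracle(N, eta, clean_sampler, corrupt, rng):
    """Toy simulation of EX(eta): draw N clean examples, then let an
    adversary replace up to eta*N of them with arbitrary labeled pairs.
    (Here the corrupted indices are drawn at random for the demo; a
    true nasty adversary chooses them after inspecting the sample.)"""
    X, y = clean_sampler(N, rng)
    m = int(eta * N)                      # at most eta*N corruptions
    idx = rng.choice(N, size=m, replace=False)
    X[idx], y[idx] = corrupt(X[idx], y[idx])
    return X, y

def sampler(N, rng):
    X = rng.standard_normal((N, 3))
    return X, np.sign(X[:, 0])

def adversary(Xc, yc):
    # Arbitrary replacement pairs: flip labels, plant points far away.
    return np.full_like(Xc, 10.0), -yc

rng = np.random.default_rng(0)
X, y = nasty_oracle(1000, 0.1, sampler, adversary, rng)
```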
There exists an inefficient algorithm that PAC learns H_{d,K} with near-optimal sample complexity O(K^d log n)...
Our main result is an attribute-efficient algorithm that runs in time (nd/ε)^{O(d)}...